00:00:00.000 Started by upstream project "autotest-per-patch" build number 132752
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.019 The recommended git tool is: git
00:00:00.019 using credential 00000000-0000-0000-0000-000000000002
00:00:00.021 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.036 Fetching changes from the remote Git repository
00:00:00.042 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.064 Using shallow fetch with depth 1
00:00:00.064 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.064 > git --version # timeout=10
00:00:00.094 > git --version # 'git version 2.39.2'
00:00:00.094 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.147 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.147 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.007 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.019 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.031 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.031 > git config core.sparsecheckout # timeout=10
00:00:03.043 > git read-tree -mu HEAD # timeout=10
00:00:03.060 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.084 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.085 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.176 [Pipeline] Start of Pipeline
00:00:03.191 [Pipeline] library
00:00:03.193 Loading library shm_lib@master
00:00:03.193 Library shm_lib@master is cached. Copying from home.
00:00:03.208 [Pipeline] node
00:00:03.220 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest_2
00:00:03.221 [Pipeline] {
00:00:03.231 [Pipeline] catchError
00:00:03.233 [Pipeline] {
00:00:03.244 [Pipeline] wrap
00:00:03.253 [Pipeline] {
00:00:03.260 [Pipeline] stage
00:00:03.262 [Pipeline] { (Prologue)
00:00:03.278 [Pipeline] echo
00:00:03.279 Node: VM-host-SM17
00:00:03.285 [Pipeline] cleanWs
00:00:03.294 [WS-CLEANUP] Deleting project workspace...
00:00:03.294 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.299 [WS-CLEANUP] done
00:00:03.480 [Pipeline] setCustomBuildProperty
00:00:03.551 [Pipeline] httpRequest
00:00:03.978 [Pipeline] echo
00:00:03.981 Sorcerer 10.211.164.101 is alive
00:00:03.993 [Pipeline] retry
00:00:03.996 [Pipeline] {
00:00:04.013 [Pipeline] httpRequest
00:00:04.018 HttpMethod: GET
00:00:04.018 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.018 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.027 Response Code: HTTP/1.1 200 OK
00:00:04.027 Success: Status code 200 is in the accepted range: 200,404
00:00:04.028 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.180 [Pipeline] }
00:00:12.197 [Pipeline] // retry
00:00:12.204 [Pipeline] sh
00:00:12.498 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.513 [Pipeline] httpRequest
00:00:13.036 [Pipeline] echo
00:00:13.038 Sorcerer 10.211.164.101 is alive
00:00:13.048 [Pipeline] retry
00:00:13.050 [Pipeline] {
00:00:13.063 [Pipeline] httpRequest
00:00:13.068 HttpMethod: GET
00:00:13.069 URL: http://10.211.164.101/packages/spdk_60adca7e12093b49a1a2e4e9e2715651af6b93f2.tar.gz
00:00:13.069 Sending request to url: http://10.211.164.101/packages/spdk_60adca7e12093b49a1a2e4e9e2715651af6b93f2.tar.gz
00:00:13.089 Response Code: HTTP/1.1 200 OK
00:00:13.090 Success: Status code 200 is in the accepted range: 200,404
00:00:13.091 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_60adca7e12093b49a1a2e4e9e2715651af6b93f2.tar.gz
00:03:06.433 [Pipeline] }
00:03:06.455 [Pipeline] // retry
00:03:06.463 [Pipeline] sh
00:03:06.740 + tar --no-same-owner -xf spdk_60adca7e12093b49a1a2e4e9e2715651af6b93f2.tar.gz
00:03:10.032 [Pipeline] sh
00:03:10.310 + git -C spdk log --oneline -n5
00:03:10.310 60adca7e1 lib/mlx5: API to configure UMR
00:03:10.310 c2471e450 nvmf: Clean unassociated_qpairs on connect error
00:03:10.310 5469bd2d1 nvmf/rdma: Fix destroy of uninitialized qpair
00:03:10.310 c7acbd6be test/iscsi_tgt: Remove support for the namespace arg
00:03:10.310 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:03:10.329 [Pipeline] writeFile
00:03:10.344 [Pipeline] sh
00:03:10.680 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:10.738 [Pipeline] sh
00:03:11.016 + cat autorun-spdk.conf
00:03:11.016 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:11.016 SPDK_RUN_ASAN=1
00:03:11.016 SPDK_RUN_UBSAN=1
00:03:11.016 SPDK_TEST_RAID=1
00:03:11.016 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:11.022 RUN_NIGHTLY=0
00:03:11.024 [Pipeline] }
00:03:11.038 [Pipeline] // stage
00:03:11.054 [Pipeline] stage
00:03:11.056 [Pipeline] { (Run VM)
00:03:11.069 [Pipeline] sh
00:03:11.347 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:11.347 + echo 'Start stage prepare_nvme.sh'
00:03:11.347 Start stage prepare_nvme.sh
00:03:11.347 + [[ -n 0 ]]
00:03:11.347 + disk_prefix=ex0
00:03:11.347 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]]
00:03:11.347 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]]
00:03:11.347 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf
00:03:11.347 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:11.347 ++ SPDK_RUN_ASAN=1
00:03:11.347 ++ SPDK_RUN_UBSAN=1
00:03:11.347 ++ SPDK_TEST_RAID=1
00:03:11.347 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:11.347 ++ RUN_NIGHTLY=0
00:03:11.347 + cd /var/jenkins/workspace/raid-vg-autotest_2
00:03:11.347 + nvme_files=()
00:03:11.347 + declare -A nvme_files
00:03:11.347 + backend_dir=/var/lib/libvirt/images/backends
00:03:11.347 + nvme_files['nvme.img']=5G
00:03:11.347 + nvme_files['nvme-cmb.img']=5G
00:03:11.347 + nvme_files['nvme-multi0.img']=4G
00:03:11.347 + nvme_files['nvme-multi1.img']=4G
00:03:11.347 + nvme_files['nvme-multi2.img']=4G
00:03:11.347 + nvme_files['nvme-openstack.img']=8G
00:03:11.347 + nvme_files['nvme-zns.img']=5G
00:03:11.347 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:11.347 + (( SPDK_TEST_FTL == 1 ))
00:03:11.347 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:11.347 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:11.347 + for nvme in "${!nvme_files[@]}"
00:03:11.347 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:03:11.347 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:11.347 + for nvme in "${!nvme_files[@]}"
00:03:11.347 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:03:11.347 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:11.347 + for nvme in "${!nvme_files[@]}"
00:03:11.347 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:03:11.347 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:11.347 + for nvme in "${!nvme_files[@]}"
00:03:11.347 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:03:11.347 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:11.347 + for nvme in "${!nvme_files[@]}"
00:03:11.347 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:03:11.347 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:11.347 + for nvme in "${!nvme_files[@]}"
00:03:11.347 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:03:11.348 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:11.348 + for nvme in "${!nvme_files[@]}"
00:03:11.348 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:03:11.348 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:11.348 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:03:11.348 + echo 'End stage prepare_nvme.sh'
00:03:11.348 End stage prepare_nvme.sh
00:03:11.359 [Pipeline] sh
00:03:11.637 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:11.637 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39
00:03:11.637
00:03:11.637 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant
00:03:11.637 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk
00:03:11.637 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2
00:03:11.637 HELP=0
00:03:11.637 DRY_RUN=0
00:03:11.637 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,
00:03:11.637 NVME_DISKS_TYPE=nvme,nvme,
00:03:11.637 NVME_AUTO_CREATE=0
00:03:11.637 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,
00:03:11.637 NVME_CMB=,,
00:03:11.637 NVME_PMR=,,
00:03:11.637 NVME_ZNS=,,
00:03:11.637 NVME_MS=,,
00:03:11.637 NVME_FDP=,,
00:03:11.637 SPDK_VAGRANT_DISTRO=fedora39
00:03:11.637 SPDK_VAGRANT_VMCPU=10
00:03:11.637 SPDK_VAGRANT_VMRAM=12288
00:03:11.637 SPDK_VAGRANT_PROVIDER=libvirt
00:03:11.637 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:11.637 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:11.637 SPDK_OPENSTACK_NETWORK=0
00:03:11.637 VAGRANT_PACKAGE_BOX=0
00:03:11.637 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:03:11.637 FORCE_DISTRO=true
00:03:11.637 VAGRANT_BOX_VERSION=
00:03:11.637 EXTRA_VAGRANTFILES=
00:03:11.638 NIC_MODEL=e1000
00:03:11.638
00:03:11.638 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt'
00:03:11.638 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2
00:03:14.925 Bringing machine 'default' up with 'libvirt' provider...
00:03:15.184 ==> default: Creating image (snapshot of base box volume).
00:03:15.443 ==> default: Creating domain with the following settings...
00:03:15.443 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733508100_35103e2e239ded854433
00:03:15.443 ==> default: -- Domain type: kvm
00:03:15.443 ==> default: -- Cpus: 10
00:03:15.443 ==> default: -- Feature: acpi
00:03:15.443 ==> default: -- Feature: apic
00:03:15.443 ==> default: -- Feature: pae
00:03:15.443 ==> default: -- Memory: 12288M
00:03:15.443 ==> default: -- Memory Backing: hugepages:
00:03:15.443 ==> default: -- Management MAC:
00:03:15.443 ==> default: -- Loader:
00:03:15.443 ==> default: -- Nvram:
00:03:15.443 ==> default: -- Base box: spdk/fedora39
00:03:15.443 ==> default: -- Storage pool: default
00:03:15.443 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733508100_35103e2e239ded854433.img (20G)
00:03:15.443 ==> default: -- Volume Cache: default
00:03:15.443 ==> default: -- Kernel:
00:03:15.443 ==> default: -- Initrd:
00:03:15.443 ==> default: -- Graphics Type: vnc
00:03:15.443 ==> default: -- Graphics Port: -1
00:03:15.443 ==> default: -- Graphics IP: 127.0.0.1
00:03:15.443 ==> default: -- Graphics Password: Not defined
00:03:15.443 ==> default: -- Video Type: cirrus
00:03:15.443 ==> default: -- Video VRAM: 9216
00:03:15.443 ==> default: -- Sound Type:
00:03:15.443 ==> default: -- Keymap: en-us
00:03:15.443 ==> default: -- TPM Path:
00:03:15.443 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:15.443 ==> default: -- Command line args:
00:03:15.443 ==> default: -> value=-device,
00:03:15.443 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:03:15.443 ==> default: -> value=-drive,
00:03:15.443 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:03:15.443 ==> default: -> value=-device,
00:03:15.443 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:15.443 ==> default: -> value=-device,
00:03:15.443 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:03:15.443 ==> default: -> value=-drive,
00:03:15.443 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:03:15.443 ==> default: -> value=-device,
00:03:15.443 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:15.443 ==> default: -> value=-drive,
00:03:15.443 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:03:15.443 ==> default: -> value=-device,
00:03:15.443 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:15.443 ==> default: -> value=-drive,
00:03:15.443 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:03:15.443 ==> default: -> value=-device,
00:03:15.444 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:15.702 ==> default: Creating shared folders metadata...
00:03:15.702 ==> default: Starting domain.
00:03:17.077 ==> default: Waiting for domain to get an IP address...
00:03:35.214 ==> default: Waiting for SSH to become available...
00:03:35.214 ==> default: Configuring and enabling network interfaces...
00:03:37.793 default: SSH address: 192.168.121.52:22
00:03:37.793 default: SSH username: vagrant
00:03:37.793 default: SSH auth method: private key
00:03:39.693 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:47.815 ==> default: Mounting SSHFS shared folder...
00:03:49.189 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:03:49.189 ==> default: Checking Mount..
00:03:50.126 ==> default: Folder Successfully Mounted!
00:03:50.126 ==> default: Running provisioner: file...
00:03:51.059 default: ~/.gitconfig => .gitconfig
00:03:51.625
00:03:51.625 SUCCESS!
00:03:51.625
00:03:51.625 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:03:51.625 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:51.625 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:03:51.625
00:03:51.634 [Pipeline] }
00:03:51.649 [Pipeline] // stage
00:03:51.659 [Pipeline] dir
00:03:51.660 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt
00:03:51.662 [Pipeline] {
00:03:51.676 [Pipeline] catchError
00:03:51.678 [Pipeline] {
00:03:51.690 [Pipeline] sh
00:03:51.986 + vagrant ssh-config --host vagrant
00:03:51.986 + + sed -ne /^Host/,$p
00:03:51.986 tee ssh_conf
00:03:56.199 Host vagrant
00:03:56.199 HostName 192.168.121.52
00:03:56.199 User vagrant
00:03:56.199 Port 22
00:03:56.199 UserKnownHostsFile /dev/null
00:03:56.199 StrictHostKeyChecking no
00:03:56.199 PasswordAuthentication no
00:03:56.199 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:56.199 IdentitiesOnly yes
00:03:56.199 LogLevel FATAL
00:03:56.199 ForwardAgent yes
00:03:56.199 ForwardX11 yes
00:03:56.199
00:03:56.213 [Pipeline] withEnv
00:03:56.215 [Pipeline] {
00:03:56.228 [Pipeline] sh
00:03:56.505 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:03:56.505 source /etc/os-release
00:03:56.505 [[ -e /image.version ]] && img=$(< /image.version)
00:03:56.505 # Minimal, systemd-like check.
00:03:56.505 if [[ -e /.dockerenv ]]; then
00:03:56.505 # Clear garbage from the node's name:
00:03:56.505 # agt-er_autotest_547-896 -> autotest_547-896
00:03:56.505 # $HOSTNAME is the actual container id
00:03:56.505 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:56.505 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:56.505 # We can assume this is a mount from a host where container is running,
00:03:56.505 # so fetch its hostname to easily identify the target swarm worker.
00:03:56.505 container="$(< /etc/hostname) ($agent)"
00:03:56.505 else
00:03:56.505 # Fallback
00:03:56.505 container=$agent
00:03:56.505 fi
00:03:56.505 fi
00:03:56.505 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:56.505
00:03:56.516 [Pipeline] }
00:03:56.531 [Pipeline] // withEnv
00:03:56.539 [Pipeline] setCustomBuildProperty
00:03:56.554 [Pipeline] stage
00:03:56.557 [Pipeline] { (Tests)
00:03:56.575 [Pipeline] sh
00:03:56.872 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:57.145 [Pipeline] sh
00:03:57.425 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:57.700 [Pipeline] timeout
00:03:57.700 Timeout set to expire in 1 hr 30 min
00:03:57.703 [Pipeline] {
00:03:57.719 [Pipeline] sh
00:03:58.002 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:03:58.567 HEAD is now at 60adca7e1 lib/mlx5: API to configure UMR
00:03:58.579 [Pipeline] sh
00:03:58.860 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:03:59.132 [Pipeline] sh
00:03:59.411 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:59.730 [Pipeline] sh
00:04:00.039 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:04:00.039 ++ readlink -f spdk_repo
00:04:00.039 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:00.039 + [[ -n /home/vagrant/spdk_repo ]]
00:04:00.039 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:00.039 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:00.039 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:00.039 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:00.039 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:00.039 + [[ raid-vg-autotest == pkgdep-* ]]
00:04:00.039 + cd /home/vagrant/spdk_repo
00:04:00.039 + source /etc/os-release
00:04:00.039 ++ NAME='Fedora Linux'
00:04:00.039 ++ VERSION='39 (Cloud Edition)'
00:04:00.039 ++ ID=fedora
00:04:00.039 ++ VERSION_ID=39
00:04:00.039 ++ VERSION_CODENAME=
00:04:00.039 ++ PLATFORM_ID=platform:f39
00:04:00.039 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:00.039 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:00.039 ++ LOGO=fedora-logo-icon
00:04:00.039 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:00.039 ++ HOME_URL=https://fedoraproject.org/
00:04:00.039 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:00.039 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:00.039 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:00.039 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:00.039 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:00.039 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:00.039 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:00.039 ++ SUPPORT_END=2024-11-12
00:04:00.040 ++ VARIANT='Cloud Edition'
00:04:00.040 ++ VARIANT_ID=cloud
00:04:00.040 + uname -a
00:04:00.040 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:00.040 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:00.613 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:00.613 Hugepages
00:04:00.613 node hugesize free / total
00:04:00.613 node0 1048576kB 0 / 0
00:04:00.613 node0 2048kB 0 / 0
00:04:00.613
00:04:00.613 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:00.613 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:00.613 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:00.871 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:04:00.871 + rm -f /tmp/spdk-ld-path
00:04:00.871 + source autorun-spdk.conf
00:04:00.871 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:00.871 ++ SPDK_RUN_ASAN=1
00:04:00.871 ++ SPDK_RUN_UBSAN=1
00:04:00.871 ++ SPDK_TEST_RAID=1
00:04:00.871 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:00.871 ++ RUN_NIGHTLY=0
00:04:00.871 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:00.871 + [[ -n '' ]]
00:04:00.871 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:00.871 + for M in /var/spdk/build-*-manifest.txt
00:04:00.871 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:00.871 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:04:00.871 + for M in /var/spdk/build-*-manifest.txt
00:04:00.871 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:00.871 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:00.871 + for M in /var/spdk/build-*-manifest.txt
00:04:00.871 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:00.871 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:00.871 ++ uname
00:04:00.871 + [[ Linux == \L\i\n\u\x ]]
00:04:00.871 + sudo dmesg -T
00:04:00.871 + sudo dmesg --clear
00:04:00.871 + dmesg_pid=5205
00:04:00.871 + [[ Fedora Linux == FreeBSD ]]
00:04:00.871 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:00.871 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:00.871 + sudo dmesg -Tw
00:04:00.871 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:00.871 + [[ -x /usr/src/fio-static/fio ]]
00:04:00.871 + export FIO_BIN=/usr/src/fio-static/fio
00:04:00.871 + FIO_BIN=/usr/src/fio-static/fio
00:04:00.871 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:00.871 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:00.871 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:00.871 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:00.871 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:00.871 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:00.871 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:00.871 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:00.871 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:00.871 18:02:26 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:04:00.871 18:02:26 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:00.871 18:02:26 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:00.871 18:02:26 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:04:00.871 18:02:26 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:04:00.871 18:02:26 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:04:00.871 18:02:26 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:00.871 18:02:26 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:04:00.871 18:02:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:00.871 18:02:26 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:01.130 18:02:26 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:04:01.130 18:02:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:01.130 18:02:26 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:01.130 18:02:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:01.130 18:02:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:01.130 18:02:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:01.130 18:02:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:01.130 18:02:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:01.130 18:02:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:01.130 18:02:26 -- paths/export.sh@5 -- $ export PATH
00:04:01.130 18:02:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:01.130 18:02:26 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:04:01.130 18:02:26 -- common/autobuild_common.sh@493 -- $ date +%s
00:04:01.130 18:02:26 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733508146.XXXXXX
00:04:01.130 18:02:26 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733508146.ZQpIuQ
00:04:01.130 18:02:26 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:04:01.130 18:02:26 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:04:01.130 18:02:26 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:04:01.130 18:02:26 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:04:01.130 18:02:26 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:04:01.130 18:02:26 -- common/autobuild_common.sh@509 -- $ get_config_params
00:04:01.130 18:02:26 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:04:01.130 18:02:26 -- common/autotest_common.sh@10 -- $ set +x
00:04:01.130 18:02:26 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:04:01.130 18:02:26 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:04:01.130 18:02:26 -- pm/common@17 -- $ local monitor
00:04:01.130 18:02:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:01.130 18:02:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:01.130 18:02:26 -- pm/common@25 -- $ sleep 1
00:04:01.130 18:02:26 -- pm/common@21 -- $ date +%s
00:04:01.130 18:02:26 -- pm/common@21 -- $ date +%s
00:04:01.130 18:02:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733508146
00:04:01.130 18:02:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733508146
00:04:01.130 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733508146_collect-cpu-load.pm.log
00:04:01.130 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733508146_collect-vmstat.pm.log
00:04:02.066 18:02:27 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:04:02.066 18:02:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:02.066 18:02:27 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:02.066 18:02:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:02.066 18:02:27 -- spdk/autobuild.sh@16 -- $ date -u
00:04:02.066 Fri Dec 6 06:02:27 PM UTC 2024
00:04:02.066 18:02:27 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:02.066 v25.01-pre-307-g60adca7e1
00:04:02.066 18:02:27 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:04:02.066 18:02:27 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:04:02.066 18:02:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:02.066 18:02:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:02.066 18:02:27 -- common/autotest_common.sh@10 -- $ set +x
00:04:02.066 ************************************
00:04:02.066 START TEST asan
00:04:02.066 ************************************
00:04:02.066 using asan
00:04:02.066 18:02:27 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:04:02.066
00:04:02.066 real 0m0.000s
00:04:02.066 user 0m0.000s
00:04:02.066 sys 0m0.000s
00:04:02.066 18:02:27 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:02.066 18:02:27 asan -- common/autotest_common.sh@10 -- $ set +x
00:04:02.066 ************************************
00:04:02.066 END TEST asan
00:04:02.066 ************************************
00:04:02.066 18:02:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:02.066 18:02:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:02.066 18:02:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:02.066 18:02:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:02.066 18:02:27 -- common/autotest_common.sh@10 -- $ set +x
00:04:02.066 ************************************
00:04:02.066 START TEST ubsan
00:04:02.066 ************************************
00:04:02.066 using ubsan
00:04:02.066 18:02:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:02.066
00:04:02.066 real 0m0.000s
00:04:02.066 user 0m0.000s
00:04:02.066 sys 0m0.000s
00:04:02.066 18:02:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:02.066 18:02:27 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:02.066 ************************************
00:04:02.066 END TEST ubsan
00:04:02.066 ************************************
00:04:02.325 18:02:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:02.325 18:02:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:02.325 18:02:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:02.325 18:02:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:02.325 18:02:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:02.325 18:02:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:02.325 18:02:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:02.325 18:02:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:02.325 18:02:27 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:04:02.325 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:02.325 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:02.891 Using 'verbs' RDMA provider
00:04:18.708 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:30.912 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:30.912 Creating mk/config.mk...done.
00:04:30.912 Creating mk/cc.flags.mk...done.
00:04:30.912 Type 'make' to build.
00:04:30.912 18:02:55 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:04:30.912 18:02:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:30.912 18:02:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:30.912 18:02:55 -- common/autotest_common.sh@10 -- $ set +x
00:04:30.912 ************************************
00:04:30.912 START TEST make
00:04:30.912 ************************************
00:04:30.912 18:02:55 make -- common/autotest_common.sh@1129 -- $ make -j10
00:04:30.912 make[1]: Nothing to be done for 'all'.
00:04:45.809 The Meson build system 00:04:45.809 Version: 1.5.0 00:04:45.809 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:45.809 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:45.809 Build type: native build 00:04:45.809 Program cat found: YES (/usr/bin/cat) 00:04:45.809 Project name: DPDK 00:04:45.809 Project version: 24.03.0 00:04:45.809 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:45.809 C linker for the host machine: cc ld.bfd 2.40-14 00:04:45.809 Host machine cpu family: x86_64 00:04:45.809 Host machine cpu: x86_64 00:04:45.809 Message: ## Building in Developer Mode ## 00:04:45.809 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:45.809 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:45.809 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:45.809 Program python3 found: YES (/usr/bin/python3) 00:04:45.809 Program cat found: YES (/usr/bin/cat) 00:04:45.809 Compiler for C supports arguments -march=native: YES 00:04:45.809 Checking for size of "void *" : 8 00:04:45.809 Checking for size of "void *" : 8 (cached) 00:04:45.809 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:45.809 Library m found: YES 00:04:45.809 Library numa found: YES 00:04:45.809 Has header "numaif.h" : YES 00:04:45.809 Library fdt found: NO 00:04:45.809 Library execinfo found: NO 00:04:45.809 Has header "execinfo.h" : YES 00:04:45.809 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:45.810 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:45.810 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:45.810 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:45.810 Run-time dependency openssl found: YES 3.1.1 00:04:45.810 Run-time dependency libpcap found: YES 1.10.4 00:04:45.810 Has header "pcap.h" with dependency 
libpcap: YES 00:04:45.810 Compiler for C supports arguments -Wcast-qual: YES 00:04:45.810 Compiler for C supports arguments -Wdeprecated: YES 00:04:45.810 Compiler for C supports arguments -Wformat: YES 00:04:45.810 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:45.810 Compiler for C supports arguments -Wformat-security: NO 00:04:45.810 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:45.810 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:45.810 Compiler for C supports arguments -Wnested-externs: YES 00:04:45.810 Compiler for C supports arguments -Wold-style-definition: YES 00:04:45.810 Compiler for C supports arguments -Wpointer-arith: YES 00:04:45.810 Compiler for C supports arguments -Wsign-compare: YES 00:04:45.810 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:45.810 Compiler for C supports arguments -Wundef: YES 00:04:45.810 Compiler for C supports arguments -Wwrite-strings: YES 00:04:45.810 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:45.810 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:45.810 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:45.810 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:45.810 Program objdump found: YES (/usr/bin/objdump) 00:04:45.810 Compiler for C supports arguments -mavx512f: YES 00:04:45.810 Checking if "AVX512 checking" compiles: YES 00:04:45.810 Fetching value of define "__SSE4_2__" : 1 00:04:45.810 Fetching value of define "__AES__" : 1 00:04:45.810 Fetching value of define "__AVX__" : 1 00:04:45.810 Fetching value of define "__AVX2__" : 1 00:04:45.810 Fetching value of define "__AVX512BW__" : (undefined) 00:04:45.810 Fetching value of define "__AVX512CD__" : (undefined) 00:04:45.810 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:45.810 Fetching value of define "__AVX512F__" : (undefined) 00:04:45.810 Fetching value of define "__AVX512VL__" : 
(undefined) 00:04:45.810 Fetching value of define "__PCLMUL__" : 1 00:04:45.810 Fetching value of define "__RDRND__" : 1 00:04:45.810 Fetching value of define "__RDSEED__" : 1 00:04:45.810 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:45.810 Fetching value of define "__znver1__" : (undefined) 00:04:45.810 Fetching value of define "__znver2__" : (undefined) 00:04:45.810 Fetching value of define "__znver3__" : (undefined) 00:04:45.810 Fetching value of define "__znver4__" : (undefined) 00:04:45.810 Library asan found: YES 00:04:45.810 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:45.810 Message: lib/log: Defining dependency "log" 00:04:45.810 Message: lib/kvargs: Defining dependency "kvargs" 00:04:45.810 Message: lib/telemetry: Defining dependency "telemetry" 00:04:45.810 Library rt found: YES 00:04:45.810 Checking for function "getentropy" : NO 00:04:45.810 Message: lib/eal: Defining dependency "eal" 00:04:45.810 Message: lib/ring: Defining dependency "ring" 00:04:45.810 Message: lib/rcu: Defining dependency "rcu" 00:04:45.810 Message: lib/mempool: Defining dependency "mempool" 00:04:45.810 Message: lib/mbuf: Defining dependency "mbuf" 00:04:45.810 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:45.810 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:45.810 Compiler for C supports arguments -mpclmul: YES 00:04:45.810 Compiler for C supports arguments -maes: YES 00:04:45.810 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:45.810 Compiler for C supports arguments -mavx512bw: YES 00:04:45.810 Compiler for C supports arguments -mavx512dq: YES 00:04:45.810 Compiler for C supports arguments -mavx512vl: YES 00:04:45.810 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:45.810 Compiler for C supports arguments -mavx2: YES 00:04:45.810 Compiler for C supports arguments -mavx: YES 00:04:45.810 Message: lib/net: Defining dependency "net" 00:04:45.810 Message: lib/meter: Defining 
dependency "meter" 00:04:45.810 Message: lib/ethdev: Defining dependency "ethdev" 00:04:45.810 Message: lib/pci: Defining dependency "pci" 00:04:45.810 Message: lib/cmdline: Defining dependency "cmdline" 00:04:45.810 Message: lib/hash: Defining dependency "hash" 00:04:45.810 Message: lib/timer: Defining dependency "timer" 00:04:45.810 Message: lib/compressdev: Defining dependency "compressdev" 00:04:45.810 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:45.810 Message: lib/dmadev: Defining dependency "dmadev" 00:04:45.810 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:45.810 Message: lib/power: Defining dependency "power" 00:04:45.810 Message: lib/reorder: Defining dependency "reorder" 00:04:45.810 Message: lib/security: Defining dependency "security" 00:04:45.810 Has header "linux/userfaultfd.h" : YES 00:04:45.810 Has header "linux/vduse.h" : YES 00:04:45.810 Message: lib/vhost: Defining dependency "vhost" 00:04:45.810 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:45.810 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:45.810 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:45.810 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:45.810 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:45.810 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:45.810 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:45.810 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:45.810 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:45.810 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:45.810 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:45.810 Configuring doxy-api-html.conf using configuration 00:04:45.810 Configuring doxy-api-man.conf using configuration 00:04:45.810 Program mandb found: YES 
(/usr/bin/mandb) 00:04:45.810 Program sphinx-build found: NO 00:04:45.810 Configuring rte_build_config.h using configuration 00:04:45.810 Message: 00:04:45.810 ================= 00:04:45.810 Applications Enabled 00:04:45.810 ================= 00:04:45.810 00:04:45.810 apps: 00:04:45.810 00:04:45.810 00:04:45.810 Message: 00:04:45.810 ================= 00:04:45.810 Libraries Enabled 00:04:45.810 ================= 00:04:45.810 00:04:45.810 libs: 00:04:45.810 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:45.810 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:45.810 cryptodev, dmadev, power, reorder, security, vhost, 00:04:45.810 00:04:45.810 Message: 00:04:45.810 =============== 00:04:45.810 Drivers Enabled 00:04:45.810 =============== 00:04:45.810 00:04:45.810 common: 00:04:45.810 00:04:45.810 bus: 00:04:45.810 pci, vdev, 00:04:45.810 mempool: 00:04:45.810 ring, 00:04:45.810 dma: 00:04:45.810 00:04:45.810 net: 00:04:45.810 00:04:45.810 crypto: 00:04:45.810 00:04:45.810 compress: 00:04:45.810 00:04:45.810 vdpa: 00:04:45.810 00:04:45.810 00:04:45.810 Message: 00:04:45.810 ================= 00:04:45.810 Content Skipped 00:04:45.810 ================= 00:04:45.810 00:04:45.810 apps: 00:04:45.810 dumpcap: explicitly disabled via build config 00:04:45.810 graph: explicitly disabled via build config 00:04:45.810 pdump: explicitly disabled via build config 00:04:45.810 proc-info: explicitly disabled via build config 00:04:45.810 test-acl: explicitly disabled via build config 00:04:45.810 test-bbdev: explicitly disabled via build config 00:04:45.810 test-cmdline: explicitly disabled via build config 00:04:45.810 test-compress-perf: explicitly disabled via build config 00:04:45.810 test-crypto-perf: explicitly disabled via build config 00:04:45.810 test-dma-perf: explicitly disabled via build config 00:04:45.810 test-eventdev: explicitly disabled via build config 00:04:45.811 test-fib: explicitly disabled via build config 00:04:45.811 
test-flow-perf: explicitly disabled via build config 00:04:45.811 test-gpudev: explicitly disabled via build config 00:04:45.811 test-mldev: explicitly disabled via build config 00:04:45.811 test-pipeline: explicitly disabled via build config 00:04:45.811 test-pmd: explicitly disabled via build config 00:04:45.811 test-regex: explicitly disabled via build config 00:04:45.811 test-sad: explicitly disabled via build config 00:04:45.811 test-security-perf: explicitly disabled via build config 00:04:45.811 00:04:45.811 libs: 00:04:45.811 argparse: explicitly disabled via build config 00:04:45.811 metrics: explicitly disabled via build config 00:04:45.811 acl: explicitly disabled via build config 00:04:45.811 bbdev: explicitly disabled via build config 00:04:45.811 bitratestats: explicitly disabled via build config 00:04:45.811 bpf: explicitly disabled via build config 00:04:45.811 cfgfile: explicitly disabled via build config 00:04:45.811 distributor: explicitly disabled via build config 00:04:45.811 efd: explicitly disabled via build config 00:04:45.811 eventdev: explicitly disabled via build config 00:04:45.811 dispatcher: explicitly disabled via build config 00:04:45.811 gpudev: explicitly disabled via build config 00:04:45.811 gro: explicitly disabled via build config 00:04:45.811 gso: explicitly disabled via build config 00:04:45.811 ip_frag: explicitly disabled via build config 00:04:45.811 jobstats: explicitly disabled via build config 00:04:45.811 latencystats: explicitly disabled via build config 00:04:45.811 lpm: explicitly disabled via build config 00:04:45.811 member: explicitly disabled via build config 00:04:45.811 pcapng: explicitly disabled via build config 00:04:45.811 rawdev: explicitly disabled via build config 00:04:45.811 regexdev: explicitly disabled via build config 00:04:45.811 mldev: explicitly disabled via build config 00:04:45.811 rib: explicitly disabled via build config 00:04:45.811 sched: explicitly disabled via build config 00:04:45.811 
stack: explicitly disabled via build config 00:04:45.811 ipsec: explicitly disabled via build config 00:04:45.811 pdcp: explicitly disabled via build config 00:04:45.811 fib: explicitly disabled via build config 00:04:45.811 port: explicitly disabled via build config 00:04:45.811 pdump: explicitly disabled via build config 00:04:45.811 table: explicitly disabled via build config 00:04:45.811 pipeline: explicitly disabled via build config 00:04:45.811 graph: explicitly disabled via build config 00:04:45.811 node: explicitly disabled via build config 00:04:45.811 00:04:45.811 drivers: 00:04:45.811 common/cpt: not in enabled drivers build config 00:04:45.811 common/dpaax: not in enabled drivers build config 00:04:45.811 common/iavf: not in enabled drivers build config 00:04:45.811 common/idpf: not in enabled drivers build config 00:04:45.811 common/ionic: not in enabled drivers build config 00:04:45.811 common/mvep: not in enabled drivers build config 00:04:45.811 common/octeontx: not in enabled drivers build config 00:04:45.811 bus/auxiliary: not in enabled drivers build config 00:04:45.811 bus/cdx: not in enabled drivers build config 00:04:45.811 bus/dpaa: not in enabled drivers build config 00:04:45.811 bus/fslmc: not in enabled drivers build config 00:04:45.811 bus/ifpga: not in enabled drivers build config 00:04:45.811 bus/platform: not in enabled drivers build config 00:04:45.811 bus/uacce: not in enabled drivers build config 00:04:45.811 bus/vmbus: not in enabled drivers build config 00:04:45.811 common/cnxk: not in enabled drivers build config 00:04:45.811 common/mlx5: not in enabled drivers build config 00:04:45.811 common/nfp: not in enabled drivers build config 00:04:45.811 common/nitrox: not in enabled drivers build config 00:04:45.811 common/qat: not in enabled drivers build config 00:04:45.811 common/sfc_efx: not in enabled drivers build config 00:04:45.811 mempool/bucket: not in enabled drivers build config 00:04:45.811 mempool/cnxk: not in enabled 
drivers build config 00:04:45.811 mempool/dpaa: not in enabled drivers build config 00:04:45.811 mempool/dpaa2: not in enabled drivers build config 00:04:45.811 mempool/octeontx: not in enabled drivers build config 00:04:45.811 mempool/stack: not in enabled drivers build config 00:04:45.811 dma/cnxk: not in enabled drivers build config 00:04:45.811 dma/dpaa: not in enabled drivers build config 00:04:45.811 dma/dpaa2: not in enabled drivers build config 00:04:45.811 dma/hisilicon: not in enabled drivers build config 00:04:45.811 dma/idxd: not in enabled drivers build config 00:04:45.811 dma/ioat: not in enabled drivers build config 00:04:45.811 dma/skeleton: not in enabled drivers build config 00:04:45.811 net/af_packet: not in enabled drivers build config 00:04:45.811 net/af_xdp: not in enabled drivers build config 00:04:45.811 net/ark: not in enabled drivers build config 00:04:45.811 net/atlantic: not in enabled drivers build config 00:04:45.811 net/avp: not in enabled drivers build config 00:04:45.811 net/axgbe: not in enabled drivers build config 00:04:45.811 net/bnx2x: not in enabled drivers build config 00:04:45.811 net/bnxt: not in enabled drivers build config 00:04:45.811 net/bonding: not in enabled drivers build config 00:04:45.811 net/cnxk: not in enabled drivers build config 00:04:45.811 net/cpfl: not in enabled drivers build config 00:04:45.811 net/cxgbe: not in enabled drivers build config 00:04:45.811 net/dpaa: not in enabled drivers build config 00:04:45.811 net/dpaa2: not in enabled drivers build config 00:04:45.811 net/e1000: not in enabled drivers build config 00:04:45.811 net/ena: not in enabled drivers build config 00:04:45.811 net/enetc: not in enabled drivers build config 00:04:45.811 net/enetfec: not in enabled drivers build config 00:04:45.811 net/enic: not in enabled drivers build config 00:04:45.811 net/failsafe: not in enabled drivers build config 00:04:45.811 net/fm10k: not in enabled drivers build config 00:04:45.811 net/gve: not in 
enabled drivers build config 00:04:45.811 net/hinic: not in enabled drivers build config 00:04:45.811 net/hns3: not in enabled drivers build config 00:04:45.811 net/i40e: not in enabled drivers build config 00:04:45.811 net/iavf: not in enabled drivers build config 00:04:45.811 net/ice: not in enabled drivers build config 00:04:45.811 net/idpf: not in enabled drivers build config 00:04:45.811 net/igc: not in enabled drivers build config 00:04:45.811 net/ionic: not in enabled drivers build config 00:04:45.811 net/ipn3ke: not in enabled drivers build config 00:04:45.811 net/ixgbe: not in enabled drivers build config 00:04:45.811 net/mana: not in enabled drivers build config 00:04:45.811 net/memif: not in enabled drivers build config 00:04:45.811 net/mlx4: not in enabled drivers build config 00:04:45.811 net/mlx5: not in enabled drivers build config 00:04:45.811 net/mvneta: not in enabled drivers build config 00:04:45.811 net/mvpp2: not in enabled drivers build config 00:04:45.811 net/netvsc: not in enabled drivers build config 00:04:45.811 net/nfb: not in enabled drivers build config 00:04:45.811 net/nfp: not in enabled drivers build config 00:04:45.811 net/ngbe: not in enabled drivers build config 00:04:45.811 net/null: not in enabled drivers build config 00:04:45.811 net/octeontx: not in enabled drivers build config 00:04:45.811 net/octeon_ep: not in enabled drivers build config 00:04:45.811 net/pcap: not in enabled drivers build config 00:04:45.811 net/pfe: not in enabled drivers build config 00:04:45.811 net/qede: not in enabled drivers build config 00:04:45.811 net/ring: not in enabled drivers build config 00:04:45.811 net/sfc: not in enabled drivers build config 00:04:45.811 net/softnic: not in enabled drivers build config 00:04:45.811 net/tap: not in enabled drivers build config 00:04:45.811 net/thunderx: not in enabled drivers build config 00:04:45.811 net/txgbe: not in enabled drivers build config 00:04:45.811 net/vdev_netvsc: not in enabled drivers build 
config 00:04:45.811 net/vhost: not in enabled drivers build config 00:04:45.811 net/virtio: not in enabled drivers build config 00:04:45.811 net/vmxnet3: not in enabled drivers build config 00:04:45.811 raw/*: missing internal dependency, "rawdev" 00:04:45.812 crypto/armv8: not in enabled drivers build config 00:04:45.812 crypto/bcmfs: not in enabled drivers build config 00:04:45.812 crypto/caam_jr: not in enabled drivers build config 00:04:45.812 crypto/ccp: not in enabled drivers build config 00:04:45.812 crypto/cnxk: not in enabled drivers build config 00:04:45.812 crypto/dpaa_sec: not in enabled drivers build config 00:04:45.812 crypto/dpaa2_sec: not in enabled drivers build config 00:04:45.812 crypto/ipsec_mb: not in enabled drivers build config 00:04:45.812 crypto/mlx5: not in enabled drivers build config 00:04:45.812 crypto/mvsam: not in enabled drivers build config 00:04:45.812 crypto/nitrox: not in enabled drivers build config 00:04:45.812 crypto/null: not in enabled drivers build config 00:04:45.812 crypto/octeontx: not in enabled drivers build config 00:04:45.812 crypto/openssl: not in enabled drivers build config 00:04:45.812 crypto/scheduler: not in enabled drivers build config 00:04:45.812 crypto/uadk: not in enabled drivers build config 00:04:45.812 crypto/virtio: not in enabled drivers build config 00:04:45.812 compress/isal: not in enabled drivers build config 00:04:45.812 compress/mlx5: not in enabled drivers build config 00:04:45.812 compress/nitrox: not in enabled drivers build config 00:04:45.812 compress/octeontx: not in enabled drivers build config 00:04:45.812 compress/zlib: not in enabled drivers build config 00:04:45.812 regex/*: missing internal dependency, "regexdev" 00:04:45.812 ml/*: missing internal dependency, "mldev" 00:04:45.812 vdpa/ifc: not in enabled drivers build config 00:04:45.812 vdpa/mlx5: not in enabled drivers build config 00:04:45.812 vdpa/nfp: not in enabled drivers build config 00:04:45.812 vdpa/sfc: not in enabled 
drivers build config 00:04:45.812 event/*: missing internal dependency, "eventdev" 00:04:45.812 baseband/*: missing internal dependency, "bbdev" 00:04:45.812 gpu/*: missing internal dependency, "gpudev" 00:04:45.812 00:04:45.812 00:04:45.812 Build targets in project: 85 00:04:45.812 00:04:45.812 DPDK 24.03.0 00:04:45.812 00:04:45.812 User defined options 00:04:45.812 buildtype : debug 00:04:45.812 default_library : shared 00:04:45.812 libdir : lib 00:04:45.812 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:45.812 b_sanitize : address 00:04:45.812 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:45.812 c_link_args : 00:04:45.812 cpu_instruction_set: native 00:04:45.812 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:45.812 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:45.812 enable_docs : false 00:04:45.812 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:45.812 enable_kmods : false 00:04:45.812 max_lcores : 128 00:04:45.812 tests : false 00:04:45.812 00:04:45.812 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:45.812 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:45.812 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:45.812 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:45.812 [3/268] Linking static target lib/librte_kvargs.a 00:04:45.812 [4/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:04:45.812 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:45.812 [6/268] Linking static target lib/librte_log.a 00:04:45.812 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.812 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:45.812 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:46.071 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:46.071 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:46.071 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:46.071 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:46.071 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:46.071 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:46.330 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.330 [17/268] Linking target lib/librte_log.so.24.1 00:04:46.330 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:46.588 [19/268] Linking static target lib/librte_telemetry.a 00:04:46.588 [20/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:46.588 [21/268] Linking target lib/librte_kvargs.so.24.1 00:04:46.846 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:46.846 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:47.105 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:47.105 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:47.105 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 
00:04:47.105 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:47.105 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:47.364 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:47.364 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:47.364 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:47.364 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:47.622 [33/268] Linking target lib/librte_telemetry.so.24.1 00:04:47.622 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:47.881 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:48.141 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:48.141 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:48.141 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:48.141 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:48.399 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:48.399 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:48.399 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:48.399 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:48.399 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:48.399 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:48.674 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:48.674 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:48.932 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:49.189 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:49.446 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:49.446 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:49.446 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:49.446 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:49.446 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:49.446 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:49.702 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:49.702 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:49.960 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:49.960 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:49.960 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:50.526 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:50.526 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:50.526 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:50.785 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:50.785 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:50.785 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:50.785 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:50.785 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:50.785 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:51.043 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:51.301 [71/268] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:51.301 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:51.301 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:51.301 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:51.559 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:51.559 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:51.559 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:51.816 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:51.816 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:51.816 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:51.816 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:52.075 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:52.075 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:52.075 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:52.333 [85/268] Linking static target lib/librte_eal.a 00:04:52.333 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:52.591 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:52.591 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:52.591 [89/268] Linking static target lib/librte_rcu.a 00:04:52.849 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:52.849 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:52.849 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:52.849 [93/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:52.849 [94/268] Linking static target lib/librte_ring.a 00:04:53.107 [95/268] 
Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:53.107 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:53.107 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:53.107 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:53.107 [99/268] Linking static target lib/librte_mempool.a 00:04:53.365 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.365 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:53.623 [102/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.623 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:53.623 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:53.623 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:53.882 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:53.882 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:53.882 [108/268] Linking static target lib/librte_net.a 00:04:53.882 [109/268] Linking static target lib/librte_meter.a 00:04:54.450 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:54.450 [111/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.450 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:54.450 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.450 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.709 [115/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:54.709 [116/268] Linking static target lib/librte_mbuf.a 00:04:54.966 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:54.966 [118/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:55.224 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:55.483 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:55.483 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:56.051 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.051 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:56.309 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:56.309 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:56.309 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:56.309 [127/268] Linking static target lib/librte_pci.a 00:04:56.309 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:56.309 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:56.567 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:56.567 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:56.567 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:56.567 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:56.825 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.825 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:56.825 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:56.825 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:56.825 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:56.825 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:56.825 [140/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:57.083 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:57.083 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:57.083 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:57.083 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:57.341 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:57.341 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:57.600 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:57.600 [148/268] Linking static target lib/librte_cmdline.a 00:04:57.858 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:57.858 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:57.858 [151/268] Linking static target lib/librte_ethdev.a 00:04:58.117 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:58.376 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:58.376 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:58.376 [155/268] Linking static target lib/librte_timer.a 00:04:58.635 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:58.635 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:58.635 [158/268] Linking static target lib/librte_hash.a 00:04:58.635 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:58.894 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:58.894 [161/268] Linking static target lib/librte_compressdev.a 00:04:58.894 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:59.153 [163/268] Generating lib/timer.sym_chk with a custom 
command (wrapped by meson to capture output) 00:04:59.153 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:59.153 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:59.411 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.670 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:59.670 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:59.670 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:59.670 [170/268] Linking static target lib/librte_dmadev.a 00:04:59.670 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:59.929 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.929 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:59.929 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.189 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:00.450 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:00.450 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:00.450 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.708 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:00.708 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:00.708 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:00.708 [182/268] Linking static target lib/librte_cryptodev.a 00:05:00.708 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:00.966 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 
00:05:00.966 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:00.966 [186/268] Linking static target lib/librte_power.a 00:05:01.534 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:01.534 [188/268] Linking static target lib/librte_reorder.a 00:05:01.534 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:01.534 [190/268] Linking static target lib/librte_security.a 00:05:01.792 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:01.792 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:01.792 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:02.049 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.308 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.566 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.824 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:02.824 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:03.081 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:03.081 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:03.339 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:03.339 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:03.597 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.597 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:03.854 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:03.854 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:04.112 [207/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:04.112 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:04.112 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:04.112 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:04.112 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:04.369 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:04.369 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:04.369 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:04.369 [215/268] Linking static target drivers/librte_bus_pci.a 00:05:04.369 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:04.369 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:04.369 [218/268] Linking static target drivers/librte_bus_vdev.a 00:05:04.627 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:04.627 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:04.627 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:04.885 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.885 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:04.885 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:04.885 [225/268] Linking static target drivers/librte_mempool_ring.a 00:05:04.885 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:05.144 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:05:05.402 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.660 [229/268] Linking target lib/librte_eal.so.24.1 00:05:05.660 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:05.660 [231/268] Linking target lib/librte_pci.so.24.1 00:05:05.660 [232/268] Linking target lib/librte_ring.so.24.1 00:05:05.918 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:05.918 [234/268] Linking target lib/librte_meter.so.24.1 00:05:05.918 [235/268] Linking target lib/librte_dmadev.so.24.1 00:05:05.918 [236/268] Linking target lib/librte_timer.so.24.1 00:05:05.918 [237/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:05.918 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:05.918 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:06.176 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:06.176 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:06.176 [242/268] Linking target lib/librte_rcu.so.24.1 00:05:06.176 [243/268] Linking target lib/librte_mempool.so.24.1 00:05:06.176 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:06.176 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:06.176 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:06.176 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:06.176 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:06.176 [249/268] Linking target lib/librte_mbuf.so.24.1 00:05:06.435 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:06.435 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:05:06.435 [252/268] Linking target 
lib/librte_reorder.so.24.1 00:05:06.693 [253/268] Linking target lib/librte_net.so.24.1 00:05:06.693 [254/268] Linking target lib/librte_compressdev.so.24.1 00:05:06.693 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:06.693 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:06.693 [257/268] Linking target lib/librte_hash.so.24.1 00:05:06.693 [258/268] Linking target lib/librte_cmdline.so.24.1 00:05:06.693 [259/268] Linking target lib/librte_security.so.24.1 00:05:06.964 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:07.222 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.222 [262/268] Linking target lib/librte_ethdev.so.24.1 00:05:07.481 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:07.481 [264/268] Linking target lib/librte_power.so.24.1 00:05:10.762 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:10.762 [266/268] Linking static target lib/librte_vhost.a 00:05:12.136 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.136 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:12.136 INFO: autodetecting backend as ninja 00:05:12.136 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:34.086 CC lib/ut/ut.o 00:05:34.086 CC lib/log/log.o 00:05:34.086 CC lib/log/log_deprecated.o 00:05:34.086 CC lib/log/log_flags.o 00:05:34.086 CC lib/ut_mock/mock.o 00:05:34.086 LIB libspdk_ut.a 00:05:34.086 SO libspdk_ut.so.2.0 00:05:34.086 LIB libspdk_log.a 00:05:34.086 LIB libspdk_ut_mock.a 00:05:34.086 SO libspdk_log.so.7.1 00:05:34.086 SO libspdk_ut_mock.so.6.0 00:05:34.086 SYMLINK libspdk_ut.so 00:05:34.086 SYMLINK libspdk_log.so 00:05:34.086 SYMLINK libspdk_ut_mock.so 00:05:34.086 
CC lib/ioat/ioat.o 00:05:34.086 CXX lib/trace_parser/trace.o 00:05:34.086 CC lib/dma/dma.o 00:05:34.086 CC lib/util/base64.o 00:05:34.086 CC lib/util/bit_array.o 00:05:34.086 CC lib/util/cpuset.o 00:05:34.086 CC lib/util/crc32.o 00:05:34.086 CC lib/util/crc16.o 00:05:34.086 CC lib/util/crc32c.o 00:05:34.086 CC lib/vfio_user/host/vfio_user_pci.o 00:05:34.086 CC lib/vfio_user/host/vfio_user.o 00:05:34.086 CC lib/util/crc32_ieee.o 00:05:34.086 LIB libspdk_dma.a 00:05:34.086 CC lib/util/crc64.o 00:05:34.086 SO libspdk_dma.so.5.0 00:05:34.086 SYMLINK libspdk_dma.so 00:05:34.086 CC lib/util/dif.o 00:05:34.086 CC lib/util/fd.o 00:05:34.086 CC lib/util/fd_group.o 00:05:34.086 LIB libspdk_ioat.a 00:05:34.086 CC lib/util/file.o 00:05:34.086 SO libspdk_ioat.so.7.0 00:05:34.086 CC lib/util/hexlify.o 00:05:34.086 LIB libspdk_vfio_user.a 00:05:34.086 CC lib/util/iov.o 00:05:34.086 CC lib/util/math.o 00:05:34.086 CC lib/util/net.o 00:05:34.086 SYMLINK libspdk_ioat.so 00:05:34.086 SO libspdk_vfio_user.so.5.0 00:05:34.086 CC lib/util/pipe.o 00:05:34.086 SYMLINK libspdk_vfio_user.so 00:05:34.086 CC lib/util/strerror_tls.o 00:05:34.086 CC lib/util/string.o 00:05:34.086 CC lib/util/uuid.o 00:05:34.086 CC lib/util/xor.o 00:05:34.086 CC lib/util/zipf.o 00:05:34.086 CC lib/util/md5.o 00:05:34.666 LIB libspdk_util.a 00:05:34.666 SO libspdk_util.so.10.1 00:05:34.923 SYMLINK libspdk_util.so 00:05:34.923 LIB libspdk_trace_parser.a 00:05:34.923 SO libspdk_trace_parser.so.6.0 00:05:34.923 CC lib/idxd/idxd.o 00:05:34.923 CC lib/idxd/idxd_kernel.o 00:05:34.923 CC lib/idxd/idxd_user.o 00:05:34.923 CC lib/rdma_utils/rdma_utils.o 00:05:34.923 CC lib/json/json_util.o 00:05:34.923 CC lib/json/json_parse.o 00:05:34.923 CC lib/env_dpdk/env.o 00:05:35.182 CC lib/conf/conf.o 00:05:35.182 CC lib/vmd/vmd.o 00:05:35.182 SYMLINK libspdk_trace_parser.so 00:05:35.182 CC lib/vmd/led.o 00:05:35.182 CC lib/json/json_write.o 00:05:35.441 LIB libspdk_conf.a 00:05:35.441 LIB libspdk_rdma_utils.a 00:05:35.441 SO 
libspdk_conf.so.6.0 00:05:35.441 SO libspdk_rdma_utils.so.1.0 00:05:35.441 CC lib/env_dpdk/memory.o 00:05:35.441 CC lib/env_dpdk/pci.o 00:05:35.441 CC lib/env_dpdk/init.o 00:05:35.441 SYMLINK libspdk_conf.so 00:05:35.441 CC lib/env_dpdk/threads.o 00:05:35.441 SYMLINK libspdk_rdma_utils.so 00:05:35.441 CC lib/env_dpdk/pci_ioat.o 00:05:35.441 CC lib/env_dpdk/pci_virtio.o 00:05:35.699 CC lib/env_dpdk/pci_vmd.o 00:05:35.699 LIB libspdk_json.a 00:05:35.699 CC lib/env_dpdk/pci_idxd.o 00:05:35.699 SO libspdk_json.so.6.0 00:05:35.699 SYMLINK libspdk_json.so 00:05:35.699 CC lib/env_dpdk/pci_event.o 00:05:35.699 CC lib/env_dpdk/sigbus_handler.o 00:05:35.699 CC lib/env_dpdk/pci_dpdk.o 00:05:35.699 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:35.956 LIB libspdk_idxd.a 00:05:35.956 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:35.956 SO libspdk_idxd.so.12.1 00:05:35.956 SYMLINK libspdk_idxd.so 00:05:36.213 CC lib/rdma_provider/common.o 00:05:36.213 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:36.213 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:36.213 CC lib/jsonrpc/jsonrpc_server.o 00:05:36.213 CC lib/jsonrpc/jsonrpc_client.o 00:05:36.213 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:36.470 LIB libspdk_vmd.a 00:05:36.470 SO libspdk_vmd.so.6.0 00:05:36.470 LIB libspdk_rdma_provider.a 00:05:36.470 SYMLINK libspdk_vmd.so 00:05:36.470 SO libspdk_rdma_provider.so.7.0 00:05:36.470 LIB libspdk_jsonrpc.a 00:05:36.727 SYMLINK libspdk_rdma_provider.so 00:05:36.727 SO libspdk_jsonrpc.so.6.0 00:05:36.727 SYMLINK libspdk_jsonrpc.so 00:05:36.983 CC lib/rpc/rpc.o 00:05:37.240 LIB libspdk_rpc.a 00:05:37.240 LIB libspdk_env_dpdk.a 00:05:37.240 SO libspdk_rpc.so.6.0 00:05:37.240 SYMLINK libspdk_rpc.so 00:05:37.240 SO libspdk_env_dpdk.so.15.1 00:05:37.498 SYMLINK libspdk_env_dpdk.so 00:05:37.498 CC lib/trace/trace.o 00:05:37.498 CC lib/trace/trace_rpc.o 00:05:37.498 CC lib/trace/trace_flags.o 00:05:37.498 CC lib/keyring/keyring.o 00:05:37.498 CC lib/keyring/keyring_rpc.o 00:05:37.498 CC lib/notify/notify.o 
00:05:37.498 CC lib/notify/notify_rpc.o 00:05:37.756 LIB libspdk_notify.a 00:05:37.756 SO libspdk_notify.so.6.0 00:05:37.756 SYMLINK libspdk_notify.so 00:05:38.014 LIB libspdk_keyring.a 00:05:38.014 LIB libspdk_trace.a 00:05:38.014 SO libspdk_keyring.so.2.0 00:05:38.014 SO libspdk_trace.so.11.0 00:05:38.014 SYMLINK libspdk_keyring.so 00:05:38.014 SYMLINK libspdk_trace.so 00:05:38.274 CC lib/thread/iobuf.o 00:05:38.274 CC lib/thread/thread.o 00:05:38.274 CC lib/sock/sock.o 00:05:38.274 CC lib/sock/sock_rpc.o 00:05:38.840 LIB libspdk_sock.a 00:05:38.840 SO libspdk_sock.so.10.0 00:05:39.099 SYMLINK libspdk_sock.so 00:05:39.358 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:39.358 CC lib/nvme/nvme_ctrlr.o 00:05:39.358 CC lib/nvme/nvme_fabric.o 00:05:39.358 CC lib/nvme/nvme_ns_cmd.o 00:05:39.358 CC lib/nvme/nvme_ns.o 00:05:39.358 CC lib/nvme/nvme_pcie_common.o 00:05:39.358 CC lib/nvme/nvme_qpair.o 00:05:39.358 CC lib/nvme/nvme_pcie.o 00:05:39.358 CC lib/nvme/nvme.o 00:05:40.291 CC lib/nvme/nvme_quirks.o 00:05:40.291 CC lib/nvme/nvme_transport.o 00:05:40.549 CC lib/nvme/nvme_discovery.o 00:05:40.549 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:40.549 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:40.807 CC lib/nvme/nvme_tcp.o 00:05:40.807 LIB libspdk_thread.a 00:05:41.066 SO libspdk_thread.so.11.0 00:05:41.066 CC lib/nvme/nvme_opal.o 00:05:41.066 SYMLINK libspdk_thread.so 00:05:41.066 CC lib/nvme/nvme_io_msg.o 00:05:41.324 CC lib/nvme/nvme_poll_group.o 00:05:41.324 CC lib/nvme/nvme_zns.o 00:05:41.324 CC lib/nvme/nvme_stubs.o 00:05:41.324 CC lib/accel/accel.o 00:05:41.324 CC lib/nvme/nvme_auth.o 00:05:41.583 CC lib/blob/blobstore.o 00:05:41.842 CC lib/blob/request.o 00:05:41.842 CC lib/blob/zeroes.o 00:05:42.101 CC lib/blob/blob_bs_dev.o 00:05:42.101 CC lib/nvme/nvme_cuse.o 00:05:42.101 CC lib/nvme/nvme_rdma.o 00:05:42.360 CC lib/init/json_config.o 00:05:42.360 CC lib/accel/accel_rpc.o 00:05:42.360 CC lib/virtio/virtio.o 00:05:42.619 CC lib/accel/accel_sw.o 00:05:42.619 CC lib/init/subsystem.o 
00:05:42.619 CC lib/init/subsystem_rpc.o 00:05:42.877 CC lib/init/rpc.o 00:05:42.877 CC lib/virtio/virtio_vhost_user.o 00:05:42.877 CC lib/virtio/virtio_vfio_user.o 00:05:42.877 CC lib/virtio/virtio_pci.o 00:05:43.136 LIB libspdk_init.a 00:05:43.136 SO libspdk_init.so.6.0 00:05:43.136 LIB libspdk_accel.a 00:05:43.136 SO libspdk_accel.so.16.0 00:05:43.136 SYMLINK libspdk_init.so 00:05:43.136 CC lib/fsdev/fsdev.o 00:05:43.136 CC lib/fsdev/fsdev_io.o 00:05:43.136 CC lib/fsdev/fsdev_rpc.o 00:05:43.395 SYMLINK libspdk_accel.so 00:05:43.395 LIB libspdk_virtio.a 00:05:43.395 CC lib/event/app.o 00:05:43.395 CC lib/event/reactor.o 00:05:43.395 CC lib/event/app_rpc.o 00:05:43.395 CC lib/event/log_rpc.o 00:05:43.395 SO libspdk_virtio.so.7.0 00:05:43.395 CC lib/bdev/bdev.o 00:05:43.653 SYMLINK libspdk_virtio.so 00:05:43.653 CC lib/bdev/bdev_rpc.o 00:05:43.653 CC lib/event/scheduler_static.o 00:05:43.653 CC lib/bdev/bdev_zone.o 00:05:43.653 CC lib/bdev/part.o 00:05:43.967 CC lib/bdev/scsi_nvme.o 00:05:43.967 LIB libspdk_event.a 00:05:44.226 LIB libspdk_fsdev.a 00:05:44.226 SO libspdk_event.so.14.0 00:05:44.226 LIB libspdk_nvme.a 00:05:44.226 SO libspdk_fsdev.so.2.0 00:05:44.226 SYMLINK libspdk_event.so 00:05:44.226 SYMLINK libspdk_fsdev.so 00:05:44.483 SO libspdk_nvme.so.15.0 00:05:44.483 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:44.741 SYMLINK libspdk_nvme.so 00:05:45.306 LIB libspdk_fuse_dispatcher.a 00:05:45.306 SO libspdk_fuse_dispatcher.so.1.0 00:05:45.306 SYMLINK libspdk_fuse_dispatcher.so 00:05:46.240 LIB libspdk_blob.a 00:05:46.240 SO libspdk_blob.so.12.0 00:05:46.498 SYMLINK libspdk_blob.so 00:05:46.756 CC lib/blobfs/blobfs.o 00:05:46.756 CC lib/blobfs/tree.o 00:05:46.756 CC lib/lvol/lvol.o 00:05:47.322 LIB libspdk_bdev.a 00:05:47.581 SO libspdk_bdev.so.17.0 00:05:47.581 SYMLINK libspdk_bdev.so 00:05:47.839 CC lib/ublk/ublk.o 00:05:47.839 CC lib/ublk/ublk_rpc.o 00:05:47.839 CC lib/nbd/nbd.o 00:05:47.839 CC lib/nbd/nbd_rpc.o 00:05:47.839 CC lib/nvmf/ctrlr.o 
00:05:47.839 CC lib/ftl/ftl_core.o 00:05:47.839 CC lib/nvmf/ctrlr_discovery.o 00:05:47.839 CC lib/scsi/dev.o 00:05:47.839 LIB libspdk_blobfs.a 00:05:47.839 SO libspdk_blobfs.so.11.0 00:05:48.098 LIB libspdk_lvol.a 00:05:48.098 SO libspdk_lvol.so.11.0 00:05:48.098 CC lib/nvmf/ctrlr_bdev.o 00:05:48.098 SYMLINK libspdk_blobfs.so 00:05:48.098 CC lib/nvmf/subsystem.o 00:05:48.098 SYMLINK libspdk_lvol.so 00:05:48.098 CC lib/nvmf/nvmf.o 00:05:48.098 CC lib/nvmf/nvmf_rpc.o 00:05:48.356 CC lib/scsi/lun.o 00:05:48.356 LIB libspdk_nbd.a 00:05:48.356 CC lib/ftl/ftl_init.o 00:05:48.356 SO libspdk_nbd.so.7.0 00:05:48.614 CC lib/nvmf/transport.o 00:05:48.614 SYMLINK libspdk_nbd.so 00:05:48.614 CC lib/nvmf/tcp.o 00:05:48.614 LIB libspdk_ublk.a 00:05:48.614 CC lib/scsi/port.o 00:05:48.614 SO libspdk_ublk.so.3.0 00:05:48.614 CC lib/ftl/ftl_layout.o 00:05:48.872 SYMLINK libspdk_ublk.so 00:05:48.872 CC lib/scsi/scsi.o 00:05:48.872 CC lib/nvmf/stubs.o 00:05:48.872 CC lib/nvmf/mdns_server.o 00:05:48.872 CC lib/scsi/scsi_bdev.o 00:05:49.130 CC lib/ftl/ftl_debug.o 00:05:49.130 CC lib/ftl/ftl_io.o 00:05:49.389 CC lib/ftl/ftl_sb.o 00:05:49.389 CC lib/ftl/ftl_l2p.o 00:05:49.389 CC lib/scsi/scsi_pr.o 00:05:49.389 CC lib/ftl/ftl_l2p_flat.o 00:05:49.663 CC lib/scsi/scsi_rpc.o 00:05:49.663 CC lib/scsi/task.o 00:05:49.663 CC lib/nvmf/rdma.o 00:05:49.663 CC lib/ftl/ftl_nv_cache.o 00:05:49.663 CC lib/ftl/ftl_band.o 00:05:49.663 CC lib/ftl/ftl_band_ops.o 00:05:49.663 CC lib/nvmf/auth.o 00:05:49.927 CC lib/ftl/ftl_writer.o 00:05:49.927 CC lib/ftl/ftl_rq.o 00:05:49.927 LIB libspdk_scsi.a 00:05:49.927 SO libspdk_scsi.so.9.0 00:05:50.185 CC lib/ftl/ftl_reloc.o 00:05:50.185 SYMLINK libspdk_scsi.so 00:05:50.185 CC lib/ftl/ftl_l2p_cache.o 00:05:50.185 CC lib/ftl/ftl_p2l.o 00:05:50.185 CC lib/ftl/ftl_p2l_log.o 00:05:50.443 CC lib/iscsi/conn.o 00:05:50.443 CC lib/ftl/mngt/ftl_mngt.o 00:05:50.443 CC lib/vhost/vhost.o 00:05:50.701 CC lib/vhost/vhost_rpc.o 00:05:50.701 CC lib/vhost/vhost_scsi.o 00:05:50.701 CC 
lib/iscsi/init_grp.o 00:05:50.959 CC lib/iscsi/iscsi.o 00:05:50.959 CC lib/iscsi/param.o 00:05:50.959 CC lib/iscsi/portal_grp.o 00:05:50.959 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:50.959 CC lib/iscsi/tgt_node.o 00:05:51.217 CC lib/vhost/vhost_blk.o 00:05:51.217 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:51.217 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:51.217 CC lib/vhost/rte_vhost_user.o 00:05:51.475 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:51.475 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:51.475 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:51.735 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:51.735 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:51.735 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:51.735 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:51.735 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:51.993 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:51.993 CC lib/iscsi/iscsi_subsystem.o 00:05:51.993 CC lib/ftl/utils/ftl_conf.o 00:05:51.993 CC lib/ftl/utils/ftl_md.o 00:05:52.251 CC lib/ftl/utils/ftl_mempool.o 00:05:52.251 CC lib/iscsi/iscsi_rpc.o 00:05:52.251 CC lib/iscsi/task.o 00:05:52.251 CC lib/ftl/utils/ftl_bitmap.o 00:05:52.251 CC lib/ftl/utils/ftl_property.o 00:05:52.251 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:52.510 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:52.510 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:52.510 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:52.768 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:52.768 LIB libspdk_vhost.a 00:05:52.768 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:52.768 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:52.768 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:52.768 SO libspdk_vhost.so.8.0 00:05:52.768 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:53.027 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:53.027 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:53.027 SYMLINK libspdk_vhost.so 00:05:53.027 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:53.027 LIB libspdk_nvmf.a 00:05:53.027 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:53.027 CC lib/ftl/base/ftl_base_dev.o 00:05:53.027 CC lib/ftl/base/ftl_base_bdev.o 
00:05:53.027 LIB libspdk_iscsi.a 00:05:53.027 CC lib/ftl/ftl_trace.o 00:05:53.285 SO libspdk_iscsi.so.8.0 00:05:53.285 SO libspdk_nvmf.so.20.0 00:05:53.285 SYMLINK libspdk_iscsi.so 00:05:53.543 LIB libspdk_ftl.a 00:05:53.543 SYMLINK libspdk_nvmf.so 00:05:53.800 SO libspdk_ftl.so.9.0 00:05:54.059 SYMLINK libspdk_ftl.so 00:05:54.317 CC module/env_dpdk/env_dpdk_rpc.o 00:05:54.576 CC module/accel/error/accel_error.o 00:05:54.576 CC module/keyring/file/keyring.o 00:05:54.576 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:54.576 CC module/scheduler/gscheduler/gscheduler.o 00:05:54.576 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:54.576 CC module/keyring/linux/keyring.o 00:05:54.576 CC module/fsdev/aio/fsdev_aio.o 00:05:54.576 CC module/sock/posix/posix.o 00:05:54.576 CC module/blob/bdev/blob_bdev.o 00:05:54.576 LIB libspdk_env_dpdk_rpc.a 00:05:54.576 SO libspdk_env_dpdk_rpc.so.6.0 00:05:54.576 CC module/keyring/file/keyring_rpc.o 00:05:54.834 SYMLINK libspdk_env_dpdk_rpc.so 00:05:54.834 CC module/keyring/linux/keyring_rpc.o 00:05:54.834 LIB libspdk_scheduler_gscheduler.a 00:05:54.834 SO libspdk_scheduler_gscheduler.so.4.0 00:05:54.834 LIB libspdk_scheduler_dpdk_governor.a 00:05:54.834 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:54.834 CC module/accel/error/accel_error_rpc.o 00:05:54.834 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:54.834 SYMLINK libspdk_scheduler_gscheduler.so 00:05:54.834 CC module/fsdev/aio/linux_aio_mgr.o 00:05:54.834 LIB libspdk_scheduler_dynamic.a 00:05:54.834 LIB libspdk_keyring_file.a 00:05:54.834 LIB libspdk_keyring_linux.a 00:05:54.834 SO libspdk_scheduler_dynamic.so.4.0 00:05:54.834 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:54.834 SO libspdk_keyring_file.so.2.0 00:05:54.834 SO libspdk_keyring_linux.so.1.0 00:05:54.834 SYMLINK libspdk_scheduler_dynamic.so 00:05:54.834 SYMLINK libspdk_keyring_file.so 00:05:54.834 SYMLINK libspdk_keyring_linux.so 00:05:55.092 LIB libspdk_accel_error.a 00:05:55.092 LIB libspdk_blob_bdev.a 
00:05:55.092 SO libspdk_accel_error.so.2.0 00:05:55.092 SO libspdk_blob_bdev.so.12.0 00:05:55.092 SYMLINK libspdk_accel_error.so 00:05:55.092 SYMLINK libspdk_blob_bdev.so 00:05:55.092 CC module/accel/ioat/accel_ioat.o 00:05:55.092 CC module/accel/ioat/accel_ioat_rpc.o 00:05:55.092 CC module/accel/dsa/accel_dsa.o 00:05:55.092 CC module/accel/dsa/accel_dsa_rpc.o 00:05:55.092 CC module/accel/iaa/accel_iaa_rpc.o 00:05:55.092 CC module/accel/iaa/accel_iaa.o 00:05:55.350 LIB libspdk_accel_ioat.a 00:05:55.350 SO libspdk_accel_ioat.so.6.0 00:05:55.351 LIB libspdk_accel_iaa.a 00:05:55.351 CC module/bdev/delay/vbdev_delay.o 00:05:55.609 SO libspdk_accel_iaa.so.3.0 00:05:55.609 CC module/blobfs/bdev/blobfs_bdev.o 00:05:55.609 SYMLINK libspdk_accel_ioat.so 00:05:55.609 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:55.609 CC module/bdev/gpt/gpt.o 00:05:55.609 LIB libspdk_sock_posix.a 00:05:55.609 CC module/bdev/error/vbdev_error.o 00:05:55.609 CC module/bdev/lvol/vbdev_lvol.o 00:05:55.609 LIB libspdk_fsdev_aio.a 00:05:55.609 SYMLINK libspdk_accel_iaa.so 00:05:55.609 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:55.609 SO libspdk_sock_posix.so.6.0 00:05:55.609 LIB libspdk_accel_dsa.a 00:05:55.609 SO libspdk_fsdev_aio.so.1.0 00:05:55.609 SO libspdk_accel_dsa.so.5.0 00:05:55.609 SYMLINK libspdk_accel_dsa.so 00:05:55.609 SYMLINK libspdk_fsdev_aio.so 00:05:55.609 CC module/bdev/error/vbdev_error_rpc.o 00:05:55.609 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:55.609 SYMLINK libspdk_sock_posix.so 00:05:55.867 CC module/bdev/gpt/vbdev_gpt.o 00:05:55.867 LIB libspdk_blobfs_bdev.a 00:05:55.867 CC module/bdev/malloc/bdev_malloc.o 00:05:55.867 CC module/bdev/null/bdev_null.o 00:05:55.867 LIB libspdk_bdev_error.a 00:05:55.867 SO libspdk_blobfs_bdev.so.6.0 00:05:55.867 LIB libspdk_bdev_delay.a 00:05:55.867 CC module/bdev/nvme/bdev_nvme.o 00:05:56.126 SO libspdk_bdev_error.so.6.0 00:05:56.126 SO libspdk_bdev_delay.so.6.0 00:05:56.126 SYMLINK libspdk_blobfs_bdev.so 00:05:56.126 SYMLINK 
libspdk_bdev_error.so 00:05:56.126 CC module/bdev/null/bdev_null_rpc.o 00:05:56.126 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:56.126 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:56.126 SYMLINK libspdk_bdev_delay.so 00:05:56.126 CC module/bdev/passthru/vbdev_passthru.o 00:05:56.126 LIB libspdk_bdev_gpt.a 00:05:56.126 SO libspdk_bdev_gpt.so.6.0 00:05:56.126 LIB libspdk_bdev_lvol.a 00:05:56.384 CC module/bdev/raid/bdev_raid.o 00:05:56.384 SO libspdk_bdev_lvol.so.6.0 00:05:56.384 SYMLINK libspdk_bdev_gpt.so 00:05:56.384 CC module/bdev/raid/bdev_raid_rpc.o 00:05:56.384 LIB libspdk_bdev_null.a 00:05:56.384 SO libspdk_bdev_null.so.6.0 00:05:56.384 SYMLINK libspdk_bdev_lvol.so 00:05:56.384 CC module/bdev/nvme/nvme_rpc.o 00:05:56.384 SYMLINK libspdk_bdev_null.so 00:05:56.384 CC module/bdev/nvme/bdev_mdns_client.o 00:05:56.384 CC module/bdev/split/vbdev_split.o 00:05:56.384 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:56.642 LIB libspdk_bdev_malloc.a 00:05:56.642 SO libspdk_bdev_malloc.so.6.0 00:05:56.642 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:56.642 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:56.642 CC module/bdev/raid/bdev_raid_sb.o 00:05:56.642 SYMLINK libspdk_bdev_malloc.so 00:05:56.642 CC module/bdev/nvme/vbdev_opal.o 00:05:56.642 CC module/bdev/split/vbdev_split_rpc.o 00:05:56.900 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:56.900 LIB libspdk_bdev_passthru.a 00:05:56.900 CC module/bdev/aio/bdev_aio.o 00:05:56.900 SO libspdk_bdev_passthru.so.6.0 00:05:56.900 LIB libspdk_bdev_zone_block.a 00:05:56.900 SO libspdk_bdev_zone_block.so.6.0 00:05:56.900 CC module/bdev/aio/bdev_aio_rpc.o 00:05:56.900 LIB libspdk_bdev_split.a 00:05:56.900 SYMLINK libspdk_bdev_passthru.so 00:05:56.900 CC module/bdev/raid/raid0.o 00:05:57.159 SO libspdk_bdev_split.so.6.0 00:05:57.159 SYMLINK libspdk_bdev_zone_block.so 00:05:57.159 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:57.159 SYMLINK libspdk_bdev_split.so 00:05:57.159 CC module/bdev/raid/raid1.o 00:05:57.159 CC 
module/bdev/raid/concat.o 00:05:57.159 CC module/bdev/raid/raid5f.o 00:05:57.417 CC module/bdev/ftl/bdev_ftl.o 00:05:57.417 CC module/bdev/iscsi/bdev_iscsi.o 00:05:57.417 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:57.417 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:57.417 LIB libspdk_bdev_aio.a 00:05:57.417 SO libspdk_bdev_aio.so.6.0 00:05:57.417 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:57.688 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:57.688 SYMLINK libspdk_bdev_aio.so 00:05:57.688 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:57.976 LIB libspdk_bdev_iscsi.a 00:05:57.976 SO libspdk_bdev_iscsi.so.6.0 00:05:57.976 LIB libspdk_bdev_ftl.a 00:05:57.976 LIB libspdk_bdev_raid.a 00:05:57.976 SO libspdk_bdev_ftl.so.6.0 00:05:57.976 SYMLINK libspdk_bdev_iscsi.so 00:05:57.976 SYMLINK libspdk_bdev_ftl.so 00:05:57.976 SO libspdk_bdev_raid.so.6.0 00:05:58.234 LIB libspdk_bdev_virtio.a 00:05:58.234 SYMLINK libspdk_bdev_raid.so 00:05:58.234 SO libspdk_bdev_virtio.so.6.0 00:05:58.234 SYMLINK libspdk_bdev_virtio.so 00:06:00.137 LIB libspdk_bdev_nvme.a 00:06:00.137 SO libspdk_bdev_nvme.so.7.1 00:06:00.137 SYMLINK libspdk_bdev_nvme.so 00:06:00.706 CC module/event/subsystems/fsdev/fsdev.o 00:06:00.706 CC module/event/subsystems/keyring/keyring.o 00:06:00.706 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:00.706 CC module/event/subsystems/vmd/vmd.o 00:06:00.706 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:00.706 CC module/event/subsystems/scheduler/scheduler.o 00:06:00.706 CC module/event/subsystems/iobuf/iobuf.o 00:06:00.706 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:00.706 CC module/event/subsystems/sock/sock.o 00:06:00.706 LIB libspdk_event_scheduler.a 00:06:00.706 LIB libspdk_event_fsdev.a 00:06:00.706 SO libspdk_event_scheduler.so.4.0 00:06:00.706 SO libspdk_event_fsdev.so.1.0 00:06:00.706 LIB libspdk_event_sock.a 00:06:00.706 LIB libspdk_event_vhost_blk.a 00:06:00.706 LIB libspdk_event_vmd.a 00:06:00.706 LIB libspdk_event_keyring.a 00:06:00.706 SO 
libspdk_event_vhost_blk.so.3.0 00:06:00.706 SO libspdk_event_sock.so.5.0 00:06:00.706 SO libspdk_event_vmd.so.6.0 00:06:00.964 SO libspdk_event_keyring.so.1.0 00:06:00.964 SYMLINK libspdk_event_fsdev.so 00:06:00.964 LIB libspdk_event_iobuf.a 00:06:00.964 SYMLINK libspdk_event_scheduler.so 00:06:00.964 SYMLINK libspdk_event_vhost_blk.so 00:06:00.964 SYMLINK libspdk_event_sock.so 00:06:00.964 SO libspdk_event_iobuf.so.3.0 00:06:00.964 SYMLINK libspdk_event_keyring.so 00:06:00.964 SYMLINK libspdk_event_vmd.so 00:06:00.964 SYMLINK libspdk_event_iobuf.so 00:06:01.302 CC module/event/subsystems/accel/accel.o 00:06:01.302 LIB libspdk_event_accel.a 00:06:01.560 SO libspdk_event_accel.so.6.0 00:06:01.560 SYMLINK libspdk_event_accel.so 00:06:01.817 CC module/event/subsystems/bdev/bdev.o 00:06:02.075 LIB libspdk_event_bdev.a 00:06:02.075 SO libspdk_event_bdev.so.6.0 00:06:02.075 SYMLINK libspdk_event_bdev.so 00:06:02.332 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:02.332 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:02.332 CC module/event/subsystems/ublk/ublk.o 00:06:02.332 CC module/event/subsystems/nbd/nbd.o 00:06:02.332 CC module/event/subsystems/scsi/scsi.o 00:06:02.332 LIB libspdk_event_ublk.a 00:06:02.332 LIB libspdk_event_nbd.a 00:06:02.588 SO libspdk_event_ublk.so.3.0 00:06:02.589 LIB libspdk_event_scsi.a 00:06:02.589 SO libspdk_event_nbd.so.6.0 00:06:02.589 SO libspdk_event_scsi.so.6.0 00:06:02.589 LIB libspdk_event_nvmf.a 00:06:02.589 SYMLINK libspdk_event_ublk.so 00:06:02.589 SYMLINK libspdk_event_nbd.so 00:06:02.589 SO libspdk_event_nvmf.so.6.0 00:06:02.589 SYMLINK libspdk_event_scsi.so 00:06:02.589 SYMLINK libspdk_event_nvmf.so 00:06:02.846 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:02.846 CC module/event/subsystems/iscsi/iscsi.o 00:06:03.105 LIB libspdk_event_vhost_scsi.a 00:06:03.105 LIB libspdk_event_iscsi.a 00:06:03.105 SO libspdk_event_vhost_scsi.so.3.0 00:06:03.105 SO libspdk_event_iscsi.so.6.0 00:06:03.105 SYMLINK 
libspdk_event_vhost_scsi.so 00:06:03.105 SYMLINK libspdk_event_iscsi.so 00:06:03.390 SO libspdk.so.6.0 00:06:03.390 SYMLINK libspdk.so 00:06:03.648 CXX app/trace/trace.o 00:06:03.648 CC app/trace_record/trace_record.o 00:06:03.648 CC app/spdk_lspci/spdk_lspci.o 00:06:03.648 CC app/spdk_nvme_perf/perf.o 00:06:03.648 CC app/nvmf_tgt/nvmf_main.o 00:06:03.648 CC app/iscsi_tgt/iscsi_tgt.o 00:06:03.648 CC app/spdk_tgt/spdk_tgt.o 00:06:03.648 CC test/thread/poller_perf/poller_perf.o 00:06:03.648 CC examples/util/zipf/zipf.o 00:06:03.648 CC examples/ioat/perf/perf.o 00:06:03.648 LINK spdk_lspci 00:06:03.907 LINK nvmf_tgt 00:06:03.907 LINK poller_perf 00:06:03.907 LINK spdk_trace_record 00:06:03.907 LINK zipf 00:06:03.907 LINK iscsi_tgt 00:06:03.907 LINK spdk_tgt 00:06:04.166 LINK ioat_perf 00:06:04.166 LINK spdk_trace 00:06:04.166 CC app/spdk_nvme_identify/identify.o 00:06:04.166 CC app/spdk_nvme_discover/discovery_aer.o 00:06:04.166 CC examples/ioat/verify/verify.o 00:06:04.424 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:04.425 CC app/spdk_top/spdk_top.o 00:06:04.425 CC test/dma/test_dma/test_dma.o 00:06:04.425 CC app/spdk_dd/spdk_dd.o 00:06:04.425 CC app/fio/nvme/fio_plugin.o 00:06:04.425 LINK spdk_nvme_discover 00:06:04.425 LINK verify 00:06:04.425 LINK interrupt_tgt 00:06:04.684 CC examples/thread/thread/thread_ex.o 00:06:04.684 LINK spdk_nvme_perf 00:06:04.684 TEST_HEADER include/spdk/accel.h 00:06:04.942 TEST_HEADER include/spdk/accel_module.h 00:06:04.942 TEST_HEADER include/spdk/assert.h 00:06:04.942 TEST_HEADER include/spdk/barrier.h 00:06:04.942 TEST_HEADER include/spdk/base64.h 00:06:04.942 TEST_HEADER include/spdk/bdev.h 00:06:04.942 TEST_HEADER include/spdk/bdev_module.h 00:06:04.942 TEST_HEADER include/spdk/bdev_zone.h 00:06:04.943 TEST_HEADER include/spdk/bit_array.h 00:06:04.943 TEST_HEADER include/spdk/bit_pool.h 00:06:04.943 TEST_HEADER include/spdk/blob_bdev.h 00:06:04.943 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:04.943 TEST_HEADER 
include/spdk/blobfs.h 00:06:04.943 TEST_HEADER include/spdk/blob.h 00:06:04.943 TEST_HEADER include/spdk/conf.h 00:06:04.943 TEST_HEADER include/spdk/config.h 00:06:04.943 TEST_HEADER include/spdk/cpuset.h 00:06:04.943 TEST_HEADER include/spdk/crc16.h 00:06:04.943 TEST_HEADER include/spdk/crc32.h 00:06:04.943 TEST_HEADER include/spdk/crc64.h 00:06:04.943 TEST_HEADER include/spdk/dif.h 00:06:04.943 LINK spdk_dd 00:06:04.943 TEST_HEADER include/spdk/dma.h 00:06:04.943 TEST_HEADER include/spdk/endian.h 00:06:04.943 TEST_HEADER include/spdk/env_dpdk.h 00:06:04.943 TEST_HEADER include/spdk/env.h 00:06:04.943 TEST_HEADER include/spdk/event.h 00:06:04.943 TEST_HEADER include/spdk/fd_group.h 00:06:04.943 TEST_HEADER include/spdk/fd.h 00:06:04.943 TEST_HEADER include/spdk/file.h 00:06:04.943 TEST_HEADER include/spdk/fsdev.h 00:06:04.943 TEST_HEADER include/spdk/fsdev_module.h 00:06:04.943 TEST_HEADER include/spdk/ftl.h 00:06:04.943 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:04.943 TEST_HEADER include/spdk/gpt_spec.h 00:06:04.943 CC examples/sock/hello_world/hello_sock.o 00:06:04.943 TEST_HEADER include/spdk/hexlify.h 00:06:04.943 TEST_HEADER include/spdk/histogram_data.h 00:06:04.943 TEST_HEADER include/spdk/idxd.h 00:06:04.943 TEST_HEADER include/spdk/idxd_spec.h 00:06:04.943 TEST_HEADER include/spdk/init.h 00:06:04.943 TEST_HEADER include/spdk/ioat.h 00:06:04.943 TEST_HEADER include/spdk/ioat_spec.h 00:06:04.943 TEST_HEADER include/spdk/iscsi_spec.h 00:06:04.943 TEST_HEADER include/spdk/json.h 00:06:04.943 TEST_HEADER include/spdk/jsonrpc.h 00:06:04.943 TEST_HEADER include/spdk/keyring.h 00:06:04.943 LINK thread 00:06:04.943 TEST_HEADER include/spdk/keyring_module.h 00:06:04.943 CC test/app/bdev_svc/bdev_svc.o 00:06:04.943 TEST_HEADER include/spdk/likely.h 00:06:04.943 TEST_HEADER include/spdk/log.h 00:06:04.943 TEST_HEADER include/spdk/lvol.h 00:06:04.943 TEST_HEADER include/spdk/md5.h 00:06:04.943 TEST_HEADER include/spdk/memory.h 00:06:04.943 TEST_HEADER 
include/spdk/mmio.h 00:06:04.943 TEST_HEADER include/spdk/nbd.h 00:06:04.943 TEST_HEADER include/spdk/net.h 00:06:04.943 TEST_HEADER include/spdk/notify.h 00:06:04.943 TEST_HEADER include/spdk/nvme.h 00:06:04.943 TEST_HEADER include/spdk/nvme_intel.h 00:06:04.943 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:04.943 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:04.943 TEST_HEADER include/spdk/nvme_spec.h 00:06:04.943 TEST_HEADER include/spdk/nvme_zns.h 00:06:04.943 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:04.943 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:04.943 TEST_HEADER include/spdk/nvmf.h 00:06:04.943 TEST_HEADER include/spdk/nvmf_spec.h 00:06:04.943 TEST_HEADER include/spdk/nvmf_transport.h 00:06:04.943 TEST_HEADER include/spdk/opal.h 00:06:04.943 TEST_HEADER include/spdk/opal_spec.h 00:06:04.943 TEST_HEADER include/spdk/pci_ids.h 00:06:04.943 TEST_HEADER include/spdk/pipe.h 00:06:04.943 TEST_HEADER include/spdk/queue.h 00:06:04.943 TEST_HEADER include/spdk/reduce.h 00:06:04.943 TEST_HEADER include/spdk/rpc.h 00:06:04.943 TEST_HEADER include/spdk/scheduler.h 00:06:04.943 TEST_HEADER include/spdk/scsi.h 00:06:04.943 TEST_HEADER include/spdk/scsi_spec.h 00:06:04.943 TEST_HEADER include/spdk/sock.h 00:06:04.943 TEST_HEADER include/spdk/stdinc.h 00:06:04.943 TEST_HEADER include/spdk/string.h 00:06:04.943 TEST_HEADER include/spdk/thread.h 00:06:04.943 TEST_HEADER include/spdk/trace.h 00:06:04.943 TEST_HEADER include/spdk/trace_parser.h 00:06:04.943 TEST_HEADER include/spdk/tree.h 00:06:04.943 TEST_HEADER include/spdk/ublk.h 00:06:04.943 TEST_HEADER include/spdk/util.h 00:06:04.943 TEST_HEADER include/spdk/uuid.h 00:06:04.943 TEST_HEADER include/spdk/version.h 00:06:04.943 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:04.943 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:04.943 TEST_HEADER include/spdk/vhost.h 00:06:04.943 TEST_HEADER include/spdk/vmd.h 00:06:04.943 TEST_HEADER include/spdk/xor.h 00:06:04.943 TEST_HEADER include/spdk/zipf.h 00:06:04.943 CXX 
test/cpp_headers/accel.o 00:06:05.202 LINK test_dma 00:06:05.202 CC app/vhost/vhost.o 00:06:05.202 LINK bdev_svc 00:06:05.202 LINK spdk_nvme 00:06:05.202 LINK hello_sock 00:06:05.202 CXX test/cpp_headers/accel_module.o 00:06:05.202 CC app/fio/bdev/fio_plugin.o 00:06:05.202 CXX test/cpp_headers/assert.o 00:06:05.202 LINK spdk_nvme_identify 00:06:05.461 CXX test/cpp_headers/barrier.o 00:06:05.461 LINK vhost 00:06:05.461 CC test/env/mem_callbacks/mem_callbacks.o 00:06:05.461 CXX test/cpp_headers/base64.o 00:06:05.461 LINK spdk_top 00:06:05.461 CC test/app/histogram_perf/histogram_perf.o 00:06:05.719 CC examples/vmd/lsvmd/lsvmd.o 00:06:05.719 CC test/env/vtophys/vtophys.o 00:06:05.719 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:05.719 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:05.719 CC test/env/memory/memory_ut.o 00:06:05.719 CXX test/cpp_headers/bdev.o 00:06:05.719 LINK lsvmd 00:06:05.719 LINK histogram_perf 00:06:05.719 LINK vtophys 00:06:05.719 CC test/app/jsoncat/jsoncat.o 00:06:05.978 LINK env_dpdk_post_init 00:06:05.978 CXX test/cpp_headers/bdev_module.o 00:06:05.978 LINK spdk_bdev 00:06:05.978 LINK jsoncat 00:06:05.978 CC examples/vmd/led/led.o 00:06:06.237 LINK mem_callbacks 00:06:06.237 CC test/event/event_perf/event_perf.o 00:06:06.237 LINK nvme_fuzz 00:06:06.237 CXX test/cpp_headers/bdev_zone.o 00:06:06.237 CC test/nvme/aer/aer.o 00:06:06.237 CC test/nvme/reset/reset.o 00:06:06.237 LINK led 00:06:06.237 CC test/nvme/sgl/sgl.o 00:06:06.237 CC test/nvme/e2edp/nvme_dp.o 00:06:06.497 CXX test/cpp_headers/bit_array.o 00:06:06.497 CC test/rpc_client/rpc_client_test.o 00:06:06.497 LINK event_perf 00:06:06.497 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:06.497 LINK reset 00:06:06.497 LINK aer 00:06:06.497 CXX test/cpp_headers/bit_pool.o 00:06:06.756 LINK sgl 00:06:06.756 LINK rpc_client_test 00:06:06.756 CC examples/idxd/perf/perf.o 00:06:06.756 LINK nvme_dp 00:06:06.756 CXX test/cpp_headers/blob_bdev.o 00:06:06.756 CC 
test/event/reactor/reactor.o 00:06:06.756 CC test/nvme/overhead/overhead.o 00:06:07.016 CC test/nvme/err_injection/err_injection.o 00:06:07.016 CXX test/cpp_headers/blobfs_bdev.o 00:06:07.016 LINK reactor 00:06:07.016 LINK idxd_perf 00:06:07.016 CC examples/accel/perf/accel_perf.o 00:06:07.016 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:07.016 LINK memory_ut 00:06:07.016 CC examples/blob/hello_world/hello_blob.o 00:06:07.275 LINK err_injection 00:06:07.275 LINK overhead 00:06:07.275 CXX test/cpp_headers/blobfs.o 00:06:07.275 CC test/event/reactor_perf/reactor_perf.o 00:06:07.275 CC test/event/app_repeat/app_repeat.o 00:06:07.534 LINK hello_blob 00:06:07.534 LINK hello_fsdev 00:06:07.534 CXX test/cpp_headers/blob.o 00:06:07.534 CC test/env/pci/pci_ut.o 00:06:07.534 LINK reactor_perf 00:06:07.534 CC test/event/scheduler/scheduler.o 00:06:07.534 CC test/nvme/startup/startup.o 00:06:07.793 CXX test/cpp_headers/conf.o 00:06:07.793 LINK app_repeat 00:06:07.793 LINK accel_perf 00:06:07.793 CC test/app/stub/stub.o 00:06:07.793 CC examples/blob/cli/blobcli.o 00:06:07.793 LINK scheduler 00:06:07.793 LINK startup 00:06:07.793 CXX test/cpp_headers/config.o 00:06:08.052 CC examples/nvme/hello_world/hello_world.o 00:06:08.052 CXX test/cpp_headers/cpuset.o 00:06:08.052 CXX test/cpp_headers/crc16.o 00:06:08.052 CC examples/nvme/reconnect/reconnect.o 00:06:08.052 LINK pci_ut 00:06:08.052 LINK stub 00:06:08.052 CXX test/cpp_headers/crc32.o 00:06:08.052 CC test/nvme/reserve/reserve.o 00:06:08.319 LINK hello_world 00:06:08.319 CC test/nvme/simple_copy/simple_copy.o 00:06:08.319 CC test/nvme/connect_stress/connect_stress.o 00:06:08.319 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:08.319 CXX test/cpp_headers/crc64.o 00:06:08.319 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:08.319 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:08.592 LINK reconnect 00:06:08.592 LINK blobcli 00:06:08.592 LINK connect_stress 00:06:08.592 LINK simple_copy 00:06:08.592 CXX 
test/cpp_headers/dif.o 00:06:08.592 CC test/nvme/boot_partition/boot_partition.o 00:06:08.592 LINK reserve 00:06:08.851 CXX test/cpp_headers/dma.o 00:06:08.851 LINK iscsi_fuzz 00:06:08.851 CC test/nvme/compliance/nvme_compliance.o 00:06:08.851 CC test/nvme/fused_ordering/fused_ordering.o 00:06:08.851 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:08.851 CC test/accel/dif/dif.o 00:06:08.851 CC test/nvme/fdp/fdp.o 00:06:08.851 LINK vhost_fuzz 00:06:08.851 CXX test/cpp_headers/endian.o 00:06:09.109 LINK boot_partition 00:06:09.109 LINK nvme_manage 00:06:09.109 LINK doorbell_aers 00:06:09.109 LINK fused_ordering 00:06:09.109 CC test/nvme/cuse/cuse.o 00:06:09.109 CXX test/cpp_headers/env_dpdk.o 00:06:09.368 LINK nvme_compliance 00:06:09.368 CXX test/cpp_headers/env.o 00:06:09.368 CC test/blobfs/mkfs/mkfs.o 00:06:09.368 LINK fdp 00:06:09.368 CC examples/nvme/arbitration/arbitration.o 00:06:09.368 CC examples/nvme/hotplug/hotplug.o 00:06:09.368 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:09.626 CC examples/nvme/abort/abort.o 00:06:09.626 CXX test/cpp_headers/event.o 00:06:09.626 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:09.626 LINK cmb_copy 00:06:09.626 LINK mkfs 00:06:09.626 CXX test/cpp_headers/fd_group.o 00:06:09.883 LINK hotplug 00:06:09.883 LINK pmr_persistence 00:06:09.883 LINK dif 00:06:09.883 LINK arbitration 00:06:09.883 CXX test/cpp_headers/fd.o 00:06:09.883 CXX test/cpp_headers/file.o 00:06:10.140 CXX test/cpp_headers/fsdev.o 00:06:10.140 LINK abort 00:06:10.140 CC test/lvol/esnap/esnap.o 00:06:10.140 CXX test/cpp_headers/fsdev_module.o 00:06:10.140 CXX test/cpp_headers/ftl.o 00:06:10.140 CXX test/cpp_headers/fuse_dispatcher.o 00:06:10.140 CC examples/bdev/hello_world/hello_bdev.o 00:06:10.140 CXX test/cpp_headers/gpt_spec.o 00:06:10.140 CC examples/bdev/bdevperf/bdevperf.o 00:06:10.399 CXX test/cpp_headers/hexlify.o 00:06:10.399 CC test/bdev/bdevio/bdevio.o 00:06:10.399 CXX test/cpp_headers/histogram_data.o 00:06:10.399 CXX 
test/cpp_headers/idxd.o 00:06:10.399 CXX test/cpp_headers/idxd_spec.o 00:06:10.399 CXX test/cpp_headers/init.o 00:06:10.399 LINK hello_bdev 00:06:10.399 CXX test/cpp_headers/ioat.o 00:06:10.656 CXX test/cpp_headers/ioat_spec.o 00:06:10.656 CXX test/cpp_headers/iscsi_spec.o 00:06:10.656 CXX test/cpp_headers/json.o 00:06:10.656 CXX test/cpp_headers/jsonrpc.o 00:06:10.656 CXX test/cpp_headers/keyring.o 00:06:10.656 CXX test/cpp_headers/keyring_module.o 00:06:10.656 CXX test/cpp_headers/likely.o 00:06:10.656 CXX test/cpp_headers/log.o 00:06:10.913 CXX test/cpp_headers/lvol.o 00:06:10.913 LINK bdevio 00:06:10.913 LINK cuse 00:06:10.913 CXX test/cpp_headers/md5.o 00:06:10.914 CXX test/cpp_headers/memory.o 00:06:10.914 CXX test/cpp_headers/mmio.o 00:06:10.914 CXX test/cpp_headers/nbd.o 00:06:10.914 CXX test/cpp_headers/net.o 00:06:10.914 CXX test/cpp_headers/notify.o 00:06:10.914 CXX test/cpp_headers/nvme.o 00:06:11.170 CXX test/cpp_headers/nvme_intel.o 00:06:11.170 CXX test/cpp_headers/nvme_ocssd.o 00:06:11.170 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:11.170 CXX test/cpp_headers/nvme_spec.o 00:06:11.170 CXX test/cpp_headers/nvme_zns.o 00:06:11.170 CXX test/cpp_headers/nvmf_cmd.o 00:06:11.170 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:11.170 CXX test/cpp_headers/nvmf.o 00:06:11.428 LINK bdevperf 00:06:11.428 CXX test/cpp_headers/nvmf_spec.o 00:06:11.428 CXX test/cpp_headers/nvmf_transport.o 00:06:11.428 CXX test/cpp_headers/opal.o 00:06:11.428 CXX test/cpp_headers/opal_spec.o 00:06:11.428 CXX test/cpp_headers/pci_ids.o 00:06:11.428 CXX test/cpp_headers/pipe.o 00:06:11.428 CXX test/cpp_headers/queue.o 00:06:11.428 CXX test/cpp_headers/reduce.o 00:06:11.428 CXX test/cpp_headers/rpc.o 00:06:11.428 CXX test/cpp_headers/scheduler.o 00:06:11.686 CXX test/cpp_headers/scsi.o 00:06:11.686 CXX test/cpp_headers/scsi_spec.o 00:06:11.686 CXX test/cpp_headers/stdinc.o 00:06:11.686 CXX test/cpp_headers/sock.o 00:06:11.686 CXX test/cpp_headers/string.o 00:06:11.686 CXX 
test/cpp_headers/thread.o 00:06:11.686 CXX test/cpp_headers/trace.o 00:06:11.686 CXX test/cpp_headers/trace_parser.o 00:06:11.686 CXX test/cpp_headers/tree.o 00:06:11.686 CXX test/cpp_headers/ublk.o 00:06:11.686 CXX test/cpp_headers/util.o 00:06:11.686 CC examples/nvmf/nvmf/nvmf.o 00:06:11.944 CXX test/cpp_headers/uuid.o 00:06:11.944 CXX test/cpp_headers/version.o 00:06:11.944 CXX test/cpp_headers/vfio_user_pci.o 00:06:11.944 CXX test/cpp_headers/vfio_user_spec.o 00:06:11.944 CXX test/cpp_headers/vhost.o 00:06:11.944 CXX test/cpp_headers/vmd.o 00:06:11.944 CXX test/cpp_headers/xor.o 00:06:11.944 CXX test/cpp_headers/zipf.o 00:06:12.202 LINK nvmf 00:06:17.549 LINK esnap 00:06:17.809 00:06:17.809 real 1m47.671s 00:06:17.809 user 10m10.560s 00:06:17.809 sys 1m53.070s 00:06:17.809 18:04:43 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:17.809 18:04:43 make -- common/autotest_common.sh@10 -- $ set +x 00:06:17.809 ************************************ 00:06:17.809 END TEST make 00:06:17.809 ************************************ 00:06:17.809 18:04:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:17.809 18:04:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:17.809 18:04:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:17.809 18:04:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:17.809 18:04:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:17.809 18:04:43 -- pm/common@44 -- $ pid=5247 00:06:17.809 18:04:43 -- pm/common@50 -- $ kill -TERM 5247 00:06:17.809 18:04:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:17.809 18:04:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:17.809 18:04:43 -- pm/common@44 -- $ pid=5248 00:06:17.809 18:04:43 -- pm/common@50 -- $ kill -TERM 5248 00:06:17.809 18:04:43 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || 
SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:17.809 18:04:43 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:17.809 18:04:43 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.809 18:04:43 -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.809 18:04:43 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:18.068 18:04:43 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:18.068 18:04:43 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.068 18:04:43 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.068 18:04:43 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.068 18:04:43 -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.068 18:04:43 -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.068 18:04:43 -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.068 18:04:43 -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.068 18:04:43 -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.068 18:04:43 -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.068 18:04:43 -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.068 18:04:43 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.068 18:04:43 -- scripts/common.sh@344 -- # case "$op" in 00:06:18.068 18:04:43 -- scripts/common.sh@345 -- # : 1 00:06:18.068 18:04:43 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.068 18:04:43 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.068 18:04:43 -- scripts/common.sh@365 -- # decimal 1 00:06:18.068 18:04:43 -- scripts/common.sh@353 -- # local d=1 00:06:18.068 18:04:43 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.068 18:04:43 -- scripts/common.sh@355 -- # echo 1 00:06:18.068 18:04:43 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.068 18:04:43 -- scripts/common.sh@366 -- # decimal 2 00:06:18.068 18:04:43 -- scripts/common.sh@353 -- # local d=2 00:06:18.068 18:04:43 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.068 18:04:43 -- scripts/common.sh@355 -- # echo 2 00:06:18.068 18:04:43 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.068 18:04:43 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.068 18:04:43 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.068 18:04:43 -- scripts/common.sh@368 -- # return 0 00:06:18.068 18:04:43 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.068 18:04:43 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:18.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.068 --rc genhtml_branch_coverage=1 00:06:18.068 --rc genhtml_function_coverage=1 00:06:18.068 --rc genhtml_legend=1 00:06:18.068 --rc geninfo_all_blocks=1 00:06:18.068 --rc geninfo_unexecuted_blocks=1 00:06:18.068 00:06:18.068 ' 00:06:18.068 18:04:43 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:18.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.068 --rc genhtml_branch_coverage=1 00:06:18.068 --rc genhtml_function_coverage=1 00:06:18.068 --rc genhtml_legend=1 00:06:18.068 --rc geninfo_all_blocks=1 00:06:18.068 --rc geninfo_unexecuted_blocks=1 00:06:18.068 00:06:18.068 ' 00:06:18.068 18:04:43 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:18.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.068 --rc genhtml_branch_coverage=1 00:06:18.068 --rc 
genhtml_function_coverage=1 00:06:18.068 --rc genhtml_legend=1 00:06:18.068 --rc geninfo_all_blocks=1 00:06:18.068 --rc geninfo_unexecuted_blocks=1 00:06:18.068 00:06:18.068 ' 00:06:18.068 18:04:43 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:18.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.068 --rc genhtml_branch_coverage=1 00:06:18.068 --rc genhtml_function_coverage=1 00:06:18.068 --rc genhtml_legend=1 00:06:18.068 --rc geninfo_all_blocks=1 00:06:18.068 --rc geninfo_unexecuted_blocks=1 00:06:18.068 00:06:18.068 ' 00:06:18.068 18:04:43 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:18.068 18:04:43 -- nvmf/common.sh@7 -- # uname -s 00:06:18.068 18:04:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.068 18:04:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.068 18:04:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.068 18:04:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.068 18:04:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.068 18:04:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.068 18:04:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.068 18:04:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.068 18:04:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.068 18:04:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.068 18:04:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e80ab19-9c15-4076-89d4-bbd3dd84ce33 00:06:18.068 18:04:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=2e80ab19-9c15-4076-89d4-bbd3dd84ce33 00:06:18.068 18:04:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.068 18:04:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.068 18:04:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:18.068 18:04:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:06:18.068 18:04:43 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.068 18:04:43 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.068 18:04:43 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.068 18:04:43 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.068 18:04:43 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.068 18:04:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.068 18:04:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.068 18:04:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.068 18:04:43 -- paths/export.sh@5 -- # export PATH 00:06:18.068 18:04:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.068 18:04:43 -- nvmf/common.sh@51 -- # : 0 00:06:18.068 18:04:43 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.068 18:04:43 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.068 18:04:43 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:06:18.068 18:04:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.068 18:04:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.068 18:04:43 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.068 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.068 18:04:43 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.068 18:04:43 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.068 18:04:43 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.068 18:04:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:18.068 18:04:43 -- spdk/autotest.sh@32 -- # uname -s 00:06:18.068 18:04:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:18.068 18:04:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:18.068 18:04:43 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:18.068 18:04:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:18.068 18:04:43 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:18.068 18:04:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:18.068 18:04:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:18.068 18:04:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:18.069 18:04:43 -- spdk/autotest.sh@48 -- # udevadm_pid=54423 00:06:18.069 18:04:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:18.069 18:04:43 -- pm/common@17 -- # local monitor 00:06:18.069 18:04:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:18.069 18:04:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:18.069 18:04:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:18.069 18:04:43 -- pm/common@25 -- # sleep 1 00:06:18.069 18:04:43 -- pm/common@21 -- # date +%s 00:06:18.069 18:04:43 -- 
pm/common@21 -- # date +%s 00:06:18.069 18:04:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733508283 00:06:18.069 18:04:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733508283 00:06:18.069 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733508283_collect-vmstat.pm.log 00:06:18.069 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733508283_collect-cpu-load.pm.log 00:06:19.004 18:04:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:19.004 18:04:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:19.004 18:04:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.004 18:04:44 -- common/autotest_common.sh@10 -- # set +x 00:06:19.004 18:04:44 -- spdk/autotest.sh@59 -- # create_test_list 00:06:19.004 18:04:44 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:19.004 18:04:44 -- common/autotest_common.sh@10 -- # set +x 00:06:19.004 18:04:44 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:19.004 18:04:44 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:19.004 18:04:44 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:19.004 18:04:44 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:19.004 18:04:44 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:19.004 18:04:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:19.004 18:04:44 -- common/autotest_common.sh@1457 -- # uname 00:06:19.004 18:04:44 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:19.004 18:04:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:19.004 18:04:44 -- common/autotest_common.sh@1477 -- 
# uname 00:06:19.004 18:04:44 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:19.004 18:04:44 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:19.004 18:04:44 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:19.308 lcov: LCOV version 1.15 00:06:19.309 18:04:44 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:37.396 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:37.396 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:55.612 18:05:18 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:55.612 18:05:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.612 18:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:55.612 18:05:18 -- spdk/autotest.sh@78 -- # rm -f 00:06:55.612 18:05:18 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:55.612 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:55.612 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:55.612 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:55.612 18:05:19 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:55.612 18:05:19 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:55.612 18:05:19 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:55.612 18:05:19 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:55.612 
18:05:19 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:55.612 18:05:19 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:55.612 18:05:19 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:55.612 18:05:19 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:55.612 18:05:19 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:55.612 18:05:19 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:55.612 18:05:19 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:55.612 18:05:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:55.612 18:05:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.612 18:05:19 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:55.612 18:05:19 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:55.612 18:05:19 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:55.612 18:05:19 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:55.612 18:05:19 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:55.612 18:05:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:55.612 18:05:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.612 18:05:19 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:55.612 18:05:19 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:06:55.612 18:05:19 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:55.612 18:05:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:55.612 18:05:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.612 18:05:19 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:55.612 18:05:19 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:06:55.612 18:05:19 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:55.612 18:05:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:55.612 18:05:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.612 18:05:19 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:55.612 18:05:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.612 18:05:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:55.612 18:05:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:55.612 18:05:19 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:55.612 18:05:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:55.612 No valid GPT data, bailing 00:06:55.612 18:05:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:55.612 18:05:19 -- scripts/common.sh@394 -- # pt= 00:06:55.612 18:05:19 -- scripts/common.sh@395 -- # return 1 00:06:55.612 18:05:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:55.612 1+0 records in 00:06:55.612 1+0 records out 00:06:55.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00387646 s, 270 MB/s 00:06:55.612 18:05:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.612 18:05:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:55.612 18:05:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:55.612 18:05:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:55.612 18:05:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:55.612 No valid GPT data, bailing 00:06:55.612 18:05:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:55.612 18:05:19 -- scripts/common.sh@394 -- # pt= 00:06:55.612 18:05:19 -- scripts/common.sh@395 -- # return 1 00:06:55.612 18:05:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:55.612 1+0 records in 00:06:55.612 1+0 records 
out 00:06:55.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507453 s, 207 MB/s 00:06:55.612 18:05:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.612 18:05:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:55.612 18:05:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:55.612 18:05:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:55.612 18:05:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:55.612 No valid GPT data, bailing 00:06:55.612 18:05:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:55.612 18:05:19 -- scripts/common.sh@394 -- # pt= 00:06:55.612 18:05:19 -- scripts/common.sh@395 -- # return 1 00:06:55.612 18:05:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:55.612 1+0 records in 00:06:55.612 1+0 records out 00:06:55.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502016 s, 209 MB/s 00:06:55.612 18:05:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.612 18:05:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:55.612 18:05:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:55.612 18:05:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:55.612 18:05:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:55.612 No valid GPT data, bailing 00:06:55.612 18:05:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:55.612 18:05:20 -- scripts/common.sh@394 -- # pt= 00:06:55.612 18:05:20 -- scripts/common.sh@395 -- # return 1 00:06:55.612 18:05:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:55.612 1+0 records in 00:06:55.612 1+0 records out 00:06:55.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408202 s, 257 MB/s 00:06:55.612 18:05:20 -- spdk/autotest.sh@105 -- # sync 00:06:55.612 18:05:20 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:06:55.612 18:05:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:55.612 18:05:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:56.988 18:05:22 -- spdk/autotest.sh@111 -- # uname -s 00:06:56.988 18:05:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:56.988 18:05:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:56.988 18:05:22 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:57.247 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:57.247 Hugepages 00:06:57.247 node hugesize free / total 00:06:57.247 node0 1048576kB 0 / 0 00:06:57.247 node0 2048kB 0 / 0 00:06:57.247 00:06:57.247 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:57.506 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:57.506 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:57.506 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:57.506 18:05:22 -- spdk/autotest.sh@117 -- # uname -s 00:06:57.506 18:05:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:57.506 18:05:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:57.506 18:05:22 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:58.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:58.442 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:58.442 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:58.442 18:05:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:59.404 18:05:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:59.404 18:05:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:59.404 18:05:24 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:59.404 18:05:24 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:06:59.404 18:05:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:59.404 18:05:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:59.404 18:05:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:59.404 18:05:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:59.404 18:05:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:59.404 18:05:24 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:59.404 18:05:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:59.404 18:05:24 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:59.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:59.979 Waiting for block devices as requested 00:06:59.979 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:59.979 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:00.238 18:05:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:00.238 18:05:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:00.238 18:05:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:00.238 18:05:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:00.238 18:05:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:00.238 18:05:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:00.238 18:05:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:00.238 18:05:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:00.238 18:05:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:00.238 
18:05:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:00.238 18:05:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:00.238 18:05:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:00.238 18:05:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:00.238 18:05:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:00.238 18:05:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:00.238 18:05:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:00.238 18:05:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:00.238 18:05:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:00.238 18:05:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:00.238 18:05:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:00.238 18:05:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:00.238 18:05:25 -- common/autotest_common.sh@1543 -- # continue 00:07:00.238 18:05:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:00.238 18:05:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:00.238 18:05:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:00.238 18:05:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:00.238 18:05:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:00.238 18:05:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:00.238 18:05:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:00.238 18:05:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:00.238 18:05:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:00.238 18:05:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:00.238 18:05:25 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:00.238 18:05:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:00.238 18:05:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:00.238 18:05:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:00.238 18:05:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:00.238 18:05:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:00.238 18:05:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:00.238 18:05:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:00.238 18:05:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:00.238 18:05:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:00.238 18:05:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:00.238 18:05:25 -- common/autotest_common.sh@1543 -- # continue 00:07:00.238 18:05:25 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:00.238 18:05:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:00.238 18:05:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.238 18:05:25 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:00.238 18:05:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:00.238 18:05:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.238 18:05:25 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:00.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:01.063 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:01.063 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:01.063 18:05:26 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:01.063 18:05:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.063 18:05:26 -- common/autotest_common.sh@10 -- # set +x 00:07:01.063 18:05:26 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:01.063 18:05:26 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:01.063 18:05:26 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:01.063 18:05:26 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:01.063 18:05:26 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:01.063 18:05:26 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:01.063 18:05:26 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:01.063 18:05:26 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:01.063 18:05:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:01.063 18:05:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:01.063 18:05:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:01.063 18:05:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:01.063 18:05:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:01.323 18:05:26 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:07:01.323 18:05:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:01.323 18:05:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:01.323 18:05:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:01.323 18:05:26 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:01.323 18:05:26 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:01.323 18:05:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:01.323 18:05:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:01.323 18:05:26 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:01.323 18:05:26 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:01.323 18:05:26 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:01.323 18:05:26 -- 
common/autotest_common.sh@1572 -- # return 0 00:07:01.323 18:05:26 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:01.323 18:05:26 -- common/autotest_common.sh@1580 -- # return 0 00:07:01.323 18:05:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:01.323 18:05:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:01.323 18:05:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:01.323 18:05:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:01.323 18:05:26 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:01.323 18:05:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:01.323 18:05:26 -- common/autotest_common.sh@10 -- # set +x 00:07:01.323 18:05:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:01.323 18:05:26 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:01.323 18:05:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.323 18:05:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.323 18:05:26 -- common/autotest_common.sh@10 -- # set +x 00:07:01.323 ************************************ 00:07:01.323 START TEST env 00:07:01.323 ************************************ 00:07:01.323 18:05:26 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:01.323 * Looking for test storage... 
00:07:01.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:01.323 18:05:26 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:01.323 18:05:26 env -- common/autotest_common.sh@1711 -- # lcov --version 00:07:01.323 18:05:26 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:01.323 18:05:26 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:01.323 18:05:26 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.323 18:05:26 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.323 18:05:26 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.323 18:05:26 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.323 18:05:26 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.323 18:05:26 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.323 18:05:26 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.323 18:05:26 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.323 18:05:26 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.323 18:05:26 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.323 18:05:26 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.323 18:05:26 env -- scripts/common.sh@344 -- # case "$op" in 00:07:01.323 18:05:26 env -- scripts/common.sh@345 -- # : 1 00:07:01.323 18:05:26 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.323 18:05:26 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.323 18:05:26 env -- scripts/common.sh@365 -- # decimal 1 00:07:01.323 18:05:26 env -- scripts/common.sh@353 -- # local d=1 00:07:01.323 18:05:26 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.323 18:05:26 env -- scripts/common.sh@355 -- # echo 1 00:07:01.323 18:05:26 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.324 18:05:26 env -- scripts/common.sh@366 -- # decimal 2 00:07:01.324 18:05:26 env -- scripts/common.sh@353 -- # local d=2 00:07:01.324 18:05:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.324 18:05:26 env -- scripts/common.sh@355 -- # echo 2 00:07:01.324 18:05:26 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.324 18:05:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.324 18:05:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.324 18:05:26 env -- scripts/common.sh@368 -- # return 0 00:07:01.324 18:05:26 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.324 18:05:26 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.324 --rc genhtml_branch_coverage=1 00:07:01.324 --rc genhtml_function_coverage=1 00:07:01.324 --rc genhtml_legend=1 00:07:01.324 --rc geninfo_all_blocks=1 00:07:01.324 --rc geninfo_unexecuted_blocks=1 00:07:01.324 00:07:01.324 ' 00:07:01.324 18:05:26 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:01.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.324 --rc genhtml_branch_coverage=1 00:07:01.324 --rc genhtml_function_coverage=1 00:07:01.324 --rc genhtml_legend=1 00:07:01.324 --rc geninfo_all_blocks=1 00:07:01.324 --rc geninfo_unexecuted_blocks=1 00:07:01.324 00:07:01.324 ' 00:07:01.324 18:05:26 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:01.324 --rc genhtml_branch_coverage=1 00:07:01.324 --rc genhtml_function_coverage=1 00:07:01.324 --rc genhtml_legend=1 00:07:01.324 --rc geninfo_all_blocks=1 00:07:01.324 --rc geninfo_unexecuted_blocks=1 00:07:01.324 00:07:01.324 ' 00:07:01.324 18:05:26 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.324 --rc genhtml_branch_coverage=1 00:07:01.324 --rc genhtml_function_coverage=1 00:07:01.324 --rc genhtml_legend=1 00:07:01.324 --rc geninfo_all_blocks=1 00:07:01.324 --rc geninfo_unexecuted_blocks=1 00:07:01.324 00:07:01.324 ' 00:07:01.324 18:05:26 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:01.324 18:05:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.324 18:05:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.324 18:05:26 env -- common/autotest_common.sh@10 -- # set +x 00:07:01.584 ************************************ 00:07:01.584 START TEST env_memory 00:07:01.584 ************************************ 00:07:01.584 18:05:26 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:01.584 00:07:01.584 00:07:01.584 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.584 http://cunit.sourceforge.net/ 00:07:01.584 00:07:01.584 00:07:01.584 Suite: memory 00:07:01.584 Test: alloc and free memory map ...[2024-12-06 18:05:26.941221] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:01.584 passed 00:07:01.584 Test: mem map translation ...[2024-12-06 18:05:27.027524] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:01.584 [2024-12-06 18:05:27.027666] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:01.584 [2024-12-06 18:05:27.027783] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:01.584 [2024-12-06 18:05:27.027827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:01.844 passed 00:07:01.844 Test: mem map registration ...[2024-12-06 18:05:27.126871] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:01.844 [2024-12-06 18:05:27.126983] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:01.844 passed 00:07:01.844 Test: mem map adjacent registrations ...passed 00:07:01.844 00:07:01.844 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.844 suites 1 1 n/a 0 0 00:07:01.844 tests 4 4 4 0 0 00:07:01.844 asserts 152 152 152 0 n/a 00:07:01.844 00:07:01.844 Elapsed time = 0.389 seconds 00:07:01.844 00:07:01.844 real 0m0.432s 00:07:01.844 user 0m0.394s 00:07:01.844 sys 0m0.028s 00:07:01.844 18:05:27 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.844 ************************************ 00:07:01.844 END TEST env_memory 00:07:01.844 ************************************ 00:07:01.844 18:05:27 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:01.844 18:05:27 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:01.844 18:05:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.844 18:05:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.844 18:05:27 env -- common/autotest_common.sh@10 -- # set +x 00:07:01.844 
************************************ 00:07:01.844 START TEST env_vtophys 00:07:01.844 ************************************ 00:07:01.844 18:05:27 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:02.103 EAL: lib.eal log level changed from notice to debug 00:07:02.103 EAL: Detected lcore 0 as core 0 on socket 0 00:07:02.103 EAL: Detected lcore 1 as core 0 on socket 0 00:07:02.103 EAL: Detected lcore 2 as core 0 on socket 0 00:07:02.103 EAL: Detected lcore 3 as core 0 on socket 0 00:07:02.103 EAL: Detected lcore 4 as core 0 on socket 0 00:07:02.103 EAL: Detected lcore 5 as core 0 on socket 0 00:07:02.103 EAL: Detected lcore 6 as core 0 on socket 0 00:07:02.103 EAL: Detected lcore 7 as core 0 on socket 0 00:07:02.103 EAL: Detected lcore 8 as core 0 on socket 0 00:07:02.103 EAL: Detected lcore 9 as core 0 on socket 0 00:07:02.103 EAL: Maximum logical cores by configuration: 128 00:07:02.103 EAL: Detected CPU lcores: 10 00:07:02.103 EAL: Detected NUMA nodes: 1 00:07:02.103 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:02.103 EAL: Detected shared linkage of DPDK 00:07:02.103 EAL: No shared files mode enabled, IPC will be disabled 00:07:02.103 EAL: Selected IOVA mode 'PA' 00:07:02.103 EAL: Probing VFIO support... 00:07:02.103 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:02.103 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:02.103 EAL: Ask a virtual area of 0x2e000 bytes 00:07:02.103 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:02.103 EAL: Setting up physically contiguous memory... 
00:07:02.103 EAL: Setting maximum number of open files to 524288 00:07:02.103 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:02.103 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:02.103 EAL: Ask a virtual area of 0x61000 bytes 00:07:02.103 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:02.103 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:02.103 EAL: Ask a virtual area of 0x400000000 bytes 00:07:02.103 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:02.103 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:02.103 EAL: Ask a virtual area of 0x61000 bytes 00:07:02.103 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:02.103 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:02.103 EAL: Ask a virtual area of 0x400000000 bytes 00:07:02.103 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:02.103 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:02.103 EAL: Ask a virtual area of 0x61000 bytes 00:07:02.103 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:02.103 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:02.103 EAL: Ask a virtual area of 0x400000000 bytes 00:07:02.103 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:02.103 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:02.103 EAL: Ask a virtual area of 0x61000 bytes 00:07:02.103 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:02.103 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:02.103 EAL: Ask a virtual area of 0x400000000 bytes 00:07:02.103 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:02.103 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:02.103 EAL: Hugepages will be freed exactly as allocated. 
00:07:02.103 EAL: No shared files mode enabled, IPC is disabled 00:07:02.103 EAL: No shared files mode enabled, IPC is disabled 00:07:02.103 EAL: TSC frequency is ~2200000 KHz 00:07:02.103 EAL: Main lcore 0 is ready (tid=7f651aed6a40;cpuset=[0]) 00:07:02.103 EAL: Trying to obtain current memory policy. 00:07:02.103 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:02.103 EAL: Restoring previous memory policy: 0 00:07:02.103 EAL: request: mp_malloc_sync 00:07:02.103 EAL: No shared files mode enabled, IPC is disabled 00:07:02.103 EAL: Heap on socket 0 was expanded by 2MB 00:07:02.103 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:02.103 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:02.103 EAL: Mem event callback 'spdk:(nil)' registered 00:07:02.103 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:02.103 00:07:02.103 00:07:02.103 CUnit - A unit testing framework for C - Version 2.1-3 00:07:02.103 http://cunit.sourceforge.net/ 00:07:02.103 00:07:02.103 00:07:02.103 Suite: components_suite 00:07:02.673 Test: vtophys_malloc_test ...passed 00:07:02.673 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:02.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:02.673 EAL: Restoring previous memory policy: 4 00:07:02.673 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.673 EAL: request: mp_malloc_sync 00:07:02.673 EAL: No shared files mode enabled, IPC is disabled 00:07:02.673 EAL: Heap on socket 0 was expanded by 4MB 00:07:02.673 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.673 EAL: request: mp_malloc_sync 00:07:02.673 EAL: No shared files mode enabled, IPC is disabled 00:07:02.673 EAL: Heap on socket 0 was shrunk by 4MB 00:07:02.673 EAL: Trying to obtain current memory policy. 
00:07:02.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:02.673 EAL: Restoring previous memory policy: 4 00:07:02.673 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.673 EAL: request: mp_malloc_sync 00:07:02.673 EAL: No shared files mode enabled, IPC is disabled 00:07:02.673 EAL: Heap on socket 0 was expanded by 6MB 00:07:02.673 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.673 EAL: request: mp_malloc_sync 00:07:02.673 EAL: No shared files mode enabled, IPC is disabled 00:07:02.673 EAL: Heap on socket 0 was shrunk by 6MB 00:07:02.673 EAL: Trying to obtain current memory policy. 00:07:02.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:02.673 EAL: Restoring previous memory policy: 4 00:07:02.673 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.673 EAL: request: mp_malloc_sync 00:07:02.673 EAL: No shared files mode enabled, IPC is disabled 00:07:02.673 EAL: Heap on socket 0 was expanded by 10MB 00:07:02.673 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.673 EAL: request: mp_malloc_sync 00:07:02.673 EAL: No shared files mode enabled, IPC is disabled 00:07:02.673 EAL: Heap on socket 0 was shrunk by 10MB 00:07:02.673 EAL: Trying to obtain current memory policy. 00:07:02.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:02.673 EAL: Restoring previous memory policy: 4 00:07:02.673 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.673 EAL: request: mp_malloc_sync 00:07:02.673 EAL: No shared files mode enabled, IPC is disabled 00:07:02.673 EAL: Heap on socket 0 was expanded by 18MB 00:07:03.011 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.011 EAL: request: mp_malloc_sync 00:07:03.011 EAL: No shared files mode enabled, IPC is disabled 00:07:03.011 EAL: Heap on socket 0 was shrunk by 18MB 00:07:03.011 EAL: Trying to obtain current memory policy. 
00:07:03.011 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:03.011 EAL: Restoring previous memory policy: 4 00:07:03.011 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.011 EAL: request: mp_malloc_sync 00:07:03.011 EAL: No shared files mode enabled, IPC is disabled 00:07:03.011 EAL: Heap on socket 0 was expanded by 34MB 00:07:03.011 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.011 EAL: request: mp_malloc_sync 00:07:03.011 EAL: No shared files mode enabled, IPC is disabled 00:07:03.011 EAL: Heap on socket 0 was shrunk by 34MB 00:07:03.011 EAL: Trying to obtain current memory policy. 00:07:03.011 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:03.011 EAL: Restoring previous memory policy: 4 00:07:03.011 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.011 EAL: request: mp_malloc_sync 00:07:03.011 EAL: No shared files mode enabled, IPC is disabled 00:07:03.011 EAL: Heap on socket 0 was expanded by 66MB 00:07:03.011 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.011 EAL: request: mp_malloc_sync 00:07:03.011 EAL: No shared files mode enabled, IPC is disabled 00:07:03.012 EAL: Heap on socket 0 was shrunk by 66MB 00:07:03.271 EAL: Trying to obtain current memory policy. 00:07:03.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:03.271 EAL: Restoring previous memory policy: 4 00:07:03.271 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.271 EAL: request: mp_malloc_sync 00:07:03.271 EAL: No shared files mode enabled, IPC is disabled 00:07:03.271 EAL: Heap on socket 0 was expanded by 130MB 00:07:03.530 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.530 EAL: request: mp_malloc_sync 00:07:03.530 EAL: No shared files mode enabled, IPC is disabled 00:07:03.530 EAL: Heap on socket 0 was shrunk by 130MB 00:07:03.530 EAL: Trying to obtain current memory policy. 
00:07:03.530 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:03.788 EAL: Restoring previous memory policy: 4 00:07:03.788 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.788 EAL: request: mp_malloc_sync 00:07:03.788 EAL: No shared files mode enabled, IPC is disabled 00:07:03.788 EAL: Heap on socket 0 was expanded by 258MB 00:07:04.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.046 EAL: request: mp_malloc_sync 00:07:04.046 EAL: No shared files mode enabled, IPC is disabled 00:07:04.046 EAL: Heap on socket 0 was shrunk by 258MB 00:07:04.613 EAL: Trying to obtain current memory policy. 00:07:04.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.613 EAL: Restoring previous memory policy: 4 00:07:04.613 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.613 EAL: request: mp_malloc_sync 00:07:04.613 EAL: No shared files mode enabled, IPC is disabled 00:07:04.613 EAL: Heap on socket 0 was expanded by 514MB 00:07:05.546 EAL: Calling mem event callback 'spdk:(nil)' 00:07:05.546 EAL: request: mp_malloc_sync 00:07:05.546 EAL: No shared files mode enabled, IPC is disabled 00:07:05.546 EAL: Heap on socket 0 was shrunk by 514MB 00:07:06.482 EAL: Trying to obtain current memory policy. 
00:07:06.482 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:06.739 EAL: Restoring previous memory policy: 4 00:07:06.739 EAL: Calling mem event callback 'spdk:(nil)' 00:07:06.739 EAL: request: mp_malloc_sync 00:07:06.739 EAL: No shared files mode enabled, IPC is disabled 00:07:06.739 EAL: Heap on socket 0 was expanded by 1026MB 00:07:08.665 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.665 EAL: request: mp_malloc_sync 00:07:08.665 EAL: No shared files mode enabled, IPC is disabled 00:07:08.665 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:10.042 passed 00:07:10.042 00:07:10.042 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.042 suites 1 1 n/a 0 0 00:07:10.042 tests 2 2 2 0 0 00:07:10.042 asserts 5747 5747 5747 0 n/a 00:07:10.042 00:07:10.042 Elapsed time = 7.749 seconds 00:07:10.042 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.042 EAL: request: mp_malloc_sync 00:07:10.042 EAL: No shared files mode enabled, IPC is disabled 00:07:10.042 EAL: Heap on socket 0 was shrunk by 2MB 00:07:10.042 EAL: No shared files mode enabled, IPC is disabled 00:07:10.042 EAL: No shared files mode enabled, IPC is disabled 00:07:10.042 EAL: No shared files mode enabled, IPC is disabled 00:07:10.042 00:07:10.042 real 0m8.089s 00:07:10.042 user 0m6.822s 00:07:10.042 sys 0m1.096s 00:07:10.042 18:05:35 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.042 18:05:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:10.042 ************************************ 00:07:10.042 END TEST env_vtophys 00:07:10.042 ************************************ 00:07:10.042 18:05:35 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:10.042 18:05:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.042 18:05:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.042 18:05:35 env -- common/autotest_common.sh@10 -- # set +x 00:07:10.042 
************************************ 00:07:10.042 START TEST env_pci 00:07:10.042 ************************************ 00:07:10.042 18:05:35 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:10.042 00:07:10.042 00:07:10.042 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.042 http://cunit.sourceforge.net/ 00:07:10.042 00:07:10.042 00:07:10.042 Suite: pci 00:07:10.042 Test: pci_hook ...[2024-12-06 18:05:35.498208] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56768 has claimed it 00:07:10.042 passed 00:07:10.042 00:07:10.042 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.042 suites 1 1 n/a 0 0 00:07:10.042 tests 1 1 1 0 0 00:07:10.042 asserts 25 25 25 0 n/a 00:07:10.042 00:07:10.042 Elapsed time = 0.007 seconds 00:07:10.042 EAL: Cannot find device (10000:00:01.0) 00:07:10.042 EAL: Failed to attach device on primary process 00:07:10.042 00:07:10.042 real 0m0.071s 00:07:10.042 user 0m0.033s 00:07:10.042 sys 0m0.037s 00:07:10.042 18:05:35 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.042 18:05:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:10.042 ************************************ 00:07:10.042 END TEST env_pci 00:07:10.042 ************************************ 00:07:10.301 18:05:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:10.301 18:05:35 env -- env/env.sh@15 -- # uname 00:07:10.301 18:05:35 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:10.301 18:05:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:10.301 18:05:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:10.301 18:05:35 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:10.301 18:05:35 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.301 18:05:35 env -- common/autotest_common.sh@10 -- # set +x 00:07:10.301 ************************************ 00:07:10.301 START TEST env_dpdk_post_init 00:07:10.301 ************************************ 00:07:10.301 18:05:35 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:10.301 EAL: Detected CPU lcores: 10 00:07:10.301 EAL: Detected NUMA nodes: 1 00:07:10.301 EAL: Detected shared linkage of DPDK 00:07:10.301 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:10.301 EAL: Selected IOVA mode 'PA' 00:07:10.301 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:10.559 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:10.559 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:10.559 Starting DPDK initialization... 00:07:10.559 Starting SPDK post initialization... 00:07:10.559 SPDK NVMe probe 00:07:10.559 Attaching to 0000:00:10.0 00:07:10.559 Attaching to 0000:00:11.0 00:07:10.559 Attached to 0000:00:10.0 00:07:10.559 Attached to 0000:00:11.0 00:07:10.559 Cleaning up... 
00:07:10.559 00:07:10.559 real 0m0.315s 00:07:10.559 user 0m0.116s 00:07:10.559 sys 0m0.099s 00:07:10.559 18:05:35 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.559 18:05:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:10.559 ************************************ 00:07:10.559 END TEST env_dpdk_post_init 00:07:10.559 ************************************ 00:07:10.559 18:05:35 env -- env/env.sh@26 -- # uname 00:07:10.559 18:05:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:10.559 18:05:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:10.559 18:05:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.559 18:05:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.559 18:05:35 env -- common/autotest_common.sh@10 -- # set +x 00:07:10.559 ************************************ 00:07:10.559 START TEST env_mem_callbacks 00:07:10.559 ************************************ 00:07:10.559 18:05:35 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:10.559 EAL: Detected CPU lcores: 10 00:07:10.559 EAL: Detected NUMA nodes: 1 00:07:10.559 EAL: Detected shared linkage of DPDK 00:07:10.559 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:10.559 EAL: Selected IOVA mode 'PA' 00:07:10.817 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:10.817 00:07:10.817 00:07:10.817 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.817 http://cunit.sourceforge.net/ 00:07:10.817 00:07:10.817 00:07:10.817 Suite: memory 00:07:10.817 Test: test ... 
00:07:10.817 register 0x200000200000 2097152 00:07:10.817 malloc 3145728 00:07:10.817 register 0x200000400000 4194304 00:07:10.817 buf 0x2000004fffc0 len 3145728 PASSED 00:07:10.817 malloc 64 00:07:10.817 buf 0x2000004ffec0 len 64 PASSED 00:07:10.817 malloc 4194304 00:07:10.817 register 0x200000800000 6291456 00:07:10.817 buf 0x2000009fffc0 len 4194304 PASSED 00:07:10.817 free 0x2000004fffc0 3145728 00:07:10.817 free 0x2000004ffec0 64 00:07:10.817 unregister 0x200000400000 4194304 PASSED 00:07:10.817 free 0x2000009fffc0 4194304 00:07:10.817 unregister 0x200000800000 6291456 PASSED 00:07:10.817 malloc 8388608 00:07:10.817 register 0x200000400000 10485760 00:07:10.817 buf 0x2000005fffc0 len 8388608 PASSED 00:07:10.817 free 0x2000005fffc0 8388608 00:07:10.817 unregister 0x200000400000 10485760 PASSED 00:07:10.817 passed 00:07:10.817 00:07:10.817 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.817 suites 1 1 n/a 0 0 00:07:10.817 tests 1 1 1 0 0 00:07:10.817 asserts 15 15 15 0 n/a 00:07:10.817 00:07:10.817 Elapsed time = 0.073 seconds 00:07:10.817 00:07:10.817 real 0m0.269s 00:07:10.817 user 0m0.098s 00:07:10.817 sys 0m0.068s 00:07:10.817 18:05:36 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.817 18:05:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:10.817 ************************************ 00:07:10.817 END TEST env_mem_callbacks 00:07:10.817 ************************************ 00:07:10.817 00:07:10.817 real 0m9.642s 00:07:10.817 user 0m7.673s 00:07:10.817 sys 0m1.568s 00:07:10.817 18:05:36 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.817 18:05:36 env -- common/autotest_common.sh@10 -- # set +x 00:07:10.817 ************************************ 00:07:10.817 END TEST env 00:07:10.817 ************************************ 00:07:10.817 18:05:36 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:10.817 18:05:36 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.817 18:05:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.817 18:05:36 -- common/autotest_common.sh@10 -- # set +x 00:07:10.817 ************************************ 00:07:10.817 START TEST rpc 00:07:10.817 ************************************ 00:07:10.817 18:05:36 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:11.076 * Looking for test storage... 00:07:11.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:11.076 18:05:36 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.076 18:05:36 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.076 18:05:36 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.076 18:05:36 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.076 18:05:36 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.076 18:05:36 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.076 18:05:36 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.076 18:05:36 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.076 18:05:36 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.076 18:05:36 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.076 18:05:36 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.076 18:05:36 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:11.076 18:05:36 rpc -- scripts/common.sh@345 -- # : 1 00:07:11.076 18:05:36 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.076 18:05:36 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.076 18:05:36 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:11.076 18:05:36 rpc -- scripts/common.sh@353 -- # local d=1 00:07:11.076 18:05:36 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.076 18:05:36 rpc -- scripts/common.sh@355 -- # echo 1 00:07:11.076 18:05:36 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.076 18:05:36 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:11.076 18:05:36 rpc -- scripts/common.sh@353 -- # local d=2 00:07:11.076 18:05:36 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.076 18:05:36 rpc -- scripts/common.sh@355 -- # echo 2 00:07:11.076 18:05:36 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.076 18:05:36 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.076 18:05:36 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.076 18:05:36 rpc -- scripts/common.sh@368 -- # return 0 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:11.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.076 --rc genhtml_branch_coverage=1 00:07:11.076 --rc genhtml_function_coverage=1 00:07:11.076 --rc genhtml_legend=1 00:07:11.076 --rc geninfo_all_blocks=1 00:07:11.076 --rc geninfo_unexecuted_blocks=1 00:07:11.076 00:07:11.076 ' 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:11.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.076 --rc genhtml_branch_coverage=1 00:07:11.076 --rc genhtml_function_coverage=1 00:07:11.076 --rc genhtml_legend=1 00:07:11.076 --rc geninfo_all_blocks=1 00:07:11.076 --rc geninfo_unexecuted_blocks=1 00:07:11.076 00:07:11.076 ' 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:11.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:11.076 --rc genhtml_branch_coverage=1 00:07:11.076 --rc genhtml_function_coverage=1 00:07:11.076 --rc genhtml_legend=1 00:07:11.076 --rc geninfo_all_blocks=1 00:07:11.076 --rc geninfo_unexecuted_blocks=1 00:07:11.076 00:07:11.076 ' 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:11.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.076 --rc genhtml_branch_coverage=1 00:07:11.076 --rc genhtml_function_coverage=1 00:07:11.076 --rc genhtml_legend=1 00:07:11.076 --rc geninfo_all_blocks=1 00:07:11.076 --rc geninfo_unexecuted_blocks=1 00:07:11.076 00:07:11.076 ' 00:07:11.076 18:05:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56895 00:07:11.076 18:05:36 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:11.076 18:05:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.076 18:05:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56895 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@835 -- # '[' -z 56895 ']' 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.076 18:05:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.399 [2024-12-06 18:05:36.635469] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:07:11.399 [2024-12-06 18:05:36.635667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56895 ] 00:07:11.399 [2024-12-06 18:05:36.828325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.685 [2024-12-06 18:05:36.980360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:11.685 [2024-12-06 18:05:36.980473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56895' to capture a snapshot of events at runtime. 00:07:11.685 [2024-12-06 18:05:36.980510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.685 [2024-12-06 18:05:36.980546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.685 [2024-12-06 18:05:36.980575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56895 for offline analysis/debug. 
00:07:11.685 [2024-12-06 18:05:36.982365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.627 18:05:37 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.627 18:05:37 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:12.627 18:05:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:12.627 18:05:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:12.627 18:05:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:12.627 18:05:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:12.627 18:05:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.627 18:05:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.627 18:05:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.627 ************************************ 00:07:12.627 START TEST rpc_integrity 00:07:12.627 ************************************ 00:07:12.627 18:05:37 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:12.627 18:05:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:12.627 18:05:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.627 18:05:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:12.627 18:05:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.627 18:05:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:12.627 18:05:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:12.627 18:05:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:12.627 18:05:37 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:12.627 18:05:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.627 18:05:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:12.627 18:05:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.627 18:05:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:12.627 18:05:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:12.627 18:05:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.627 18:05:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:12.627 18:05:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.627 18:05:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:12.627 { 00:07:12.627 "name": "Malloc0", 00:07:12.627 "aliases": [ 00:07:12.627 "9989d5cc-1546-450e-b99e-20411c6a4818" 00:07:12.627 ], 00:07:12.627 "product_name": "Malloc disk", 00:07:12.627 "block_size": 512, 00:07:12.627 "num_blocks": 16384, 00:07:12.627 "uuid": "9989d5cc-1546-450e-b99e-20411c6a4818", 00:07:12.627 "assigned_rate_limits": { 00:07:12.627 "rw_ios_per_sec": 0, 00:07:12.627 "rw_mbytes_per_sec": 0, 00:07:12.627 "r_mbytes_per_sec": 0, 00:07:12.627 "w_mbytes_per_sec": 0 00:07:12.627 }, 00:07:12.627 "claimed": false, 00:07:12.627 "zoned": false, 00:07:12.627 "supported_io_types": { 00:07:12.627 "read": true, 00:07:12.627 "write": true, 00:07:12.627 "unmap": true, 00:07:12.627 "flush": true, 00:07:12.627 "reset": true, 00:07:12.627 "nvme_admin": false, 00:07:12.627 "nvme_io": false, 00:07:12.627 "nvme_io_md": false, 00:07:12.627 "write_zeroes": true, 00:07:12.627 "zcopy": true, 00:07:12.627 "get_zone_info": false, 00:07:12.627 "zone_management": false, 00:07:12.627 "zone_append": false, 00:07:12.627 "compare": false, 00:07:12.627 "compare_and_write": false, 00:07:12.627 "abort": true, 00:07:12.627 "seek_hole": false, 
00:07:12.627 "seek_data": false, 00:07:12.627 "copy": true, 00:07:12.627 "nvme_iov_md": false 00:07:12.627 }, 00:07:12.627 "memory_domains": [ 00:07:12.627 { 00:07:12.627 "dma_device_id": "system", 00:07:12.627 "dma_device_type": 1 00:07:12.627 }, 00:07:12.627 { 00:07:12.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.627 "dma_device_type": 2 00:07:12.627 } 00:07:12.627 ], 00:07:12.627 "driver_specific": {} 00:07:12.627 } 00:07:12.627 ]' 00:07:12.627 18:05:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:12.627 18:05:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:12.627 18:05:38 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:12.627 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.627 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:12.627 [2024-12-06 18:05:38.049282] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:12.627 [2024-12-06 18:05:38.049391] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.627 [2024-12-06 18:05:38.049459] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:12.627 [2024-12-06 18:05:38.049523] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.627 [2024-12-06 18:05:38.052640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.627 [2024-12-06 18:05:38.052726] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:12.627 Passthru0 00:07:12.627 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.627 18:05:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:12.627 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.627 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:07:12.627 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.627 18:05:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:12.627 { 00:07:12.627 "name": "Malloc0", 00:07:12.627 "aliases": [ 00:07:12.627 "9989d5cc-1546-450e-b99e-20411c6a4818" 00:07:12.627 ], 00:07:12.627 "product_name": "Malloc disk", 00:07:12.627 "block_size": 512, 00:07:12.627 "num_blocks": 16384, 00:07:12.627 "uuid": "9989d5cc-1546-450e-b99e-20411c6a4818", 00:07:12.627 "assigned_rate_limits": { 00:07:12.627 "rw_ios_per_sec": 0, 00:07:12.627 "rw_mbytes_per_sec": 0, 00:07:12.627 "r_mbytes_per_sec": 0, 00:07:12.627 "w_mbytes_per_sec": 0 00:07:12.627 }, 00:07:12.627 "claimed": true, 00:07:12.627 "claim_type": "exclusive_write", 00:07:12.627 "zoned": false, 00:07:12.627 "supported_io_types": { 00:07:12.627 "read": true, 00:07:12.627 "write": true, 00:07:12.627 "unmap": true, 00:07:12.627 "flush": true, 00:07:12.627 "reset": true, 00:07:12.627 "nvme_admin": false, 00:07:12.627 "nvme_io": false, 00:07:12.627 "nvme_io_md": false, 00:07:12.627 "write_zeroes": true, 00:07:12.627 "zcopy": true, 00:07:12.627 "get_zone_info": false, 00:07:12.627 "zone_management": false, 00:07:12.627 "zone_append": false, 00:07:12.627 "compare": false, 00:07:12.627 "compare_and_write": false, 00:07:12.627 "abort": true, 00:07:12.627 "seek_hole": false, 00:07:12.627 "seek_data": false, 00:07:12.627 "copy": true, 00:07:12.627 "nvme_iov_md": false 00:07:12.627 }, 00:07:12.627 "memory_domains": [ 00:07:12.627 { 00:07:12.627 "dma_device_id": "system", 00:07:12.628 "dma_device_type": 1 00:07:12.628 }, 00:07:12.628 { 00:07:12.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.628 "dma_device_type": 2 00:07:12.628 } 00:07:12.628 ], 00:07:12.628 "driver_specific": {} 00:07:12.628 }, 00:07:12.628 { 00:07:12.628 "name": "Passthru0", 00:07:12.628 "aliases": [ 00:07:12.628 "31117008-b0b4-505a-a8a3-08669ba526b5" 00:07:12.628 ], 00:07:12.628 "product_name": "passthru", 00:07:12.628 
"block_size": 512, 00:07:12.628 "num_blocks": 16384, 00:07:12.628 "uuid": "31117008-b0b4-505a-a8a3-08669ba526b5", 00:07:12.628 "assigned_rate_limits": { 00:07:12.628 "rw_ios_per_sec": 0, 00:07:12.628 "rw_mbytes_per_sec": 0, 00:07:12.628 "r_mbytes_per_sec": 0, 00:07:12.628 "w_mbytes_per_sec": 0 00:07:12.628 }, 00:07:12.628 "claimed": false, 00:07:12.628 "zoned": false, 00:07:12.628 "supported_io_types": { 00:07:12.628 "read": true, 00:07:12.628 "write": true, 00:07:12.628 "unmap": true, 00:07:12.628 "flush": true, 00:07:12.628 "reset": true, 00:07:12.628 "nvme_admin": false, 00:07:12.628 "nvme_io": false, 00:07:12.628 "nvme_io_md": false, 00:07:12.628 "write_zeroes": true, 00:07:12.628 "zcopy": true, 00:07:12.628 "get_zone_info": false, 00:07:12.628 "zone_management": false, 00:07:12.628 "zone_append": false, 00:07:12.628 "compare": false, 00:07:12.628 "compare_and_write": false, 00:07:12.628 "abort": true, 00:07:12.628 "seek_hole": false, 00:07:12.628 "seek_data": false, 00:07:12.628 "copy": true, 00:07:12.628 "nvme_iov_md": false 00:07:12.628 }, 00:07:12.628 "memory_domains": [ 00:07:12.628 { 00:07:12.628 "dma_device_id": "system", 00:07:12.628 "dma_device_type": 1 00:07:12.628 }, 00:07:12.628 { 00:07:12.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.628 "dma_device_type": 2 00:07:12.628 } 00:07:12.628 ], 00:07:12.628 "driver_specific": { 00:07:12.628 "passthru": { 00:07:12.628 "name": "Passthru0", 00:07:12.628 "base_bdev_name": "Malloc0" 00:07:12.628 } 00:07:12.628 } 00:07:12.628 } 00:07:12.628 ]' 00:07:12.628 18:05:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:12.628 18:05:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:12.628 18:05:38 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:12.628 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.628 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:12.628 18:05:38 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.628 18:05:38 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:12.628 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.628 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:12.887 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.887 18:05:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:12.887 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.887 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:12.887 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.887 18:05:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:12.887 18:05:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:12.887 18:05:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:12.887 00:07:12.887 real 0m0.344s 00:07:12.887 user 0m0.197s 00:07:12.887 sys 0m0.044s 00:07:12.887 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.887 18:05:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:12.887 ************************************ 00:07:12.887 END TEST rpc_integrity 00:07:12.887 ************************************ 00:07:12.887 18:05:38 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:12.887 18:05:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.887 18:05:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.887 18:05:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.887 ************************************ 00:07:12.887 START TEST rpc_plugins 00:07:12.887 ************************************ 00:07:12.887 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:12.887 18:05:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:07:12.887 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.887 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:12.887 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.887 18:05:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:12.887 18:05:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:12.887 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.887 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:12.887 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.887 18:05:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:12.887 { 00:07:12.887 "name": "Malloc1", 00:07:12.887 "aliases": [ 00:07:12.887 "0f966047-35de-405f-a60a-fd371785137f" 00:07:12.887 ], 00:07:12.887 "product_name": "Malloc disk", 00:07:12.887 "block_size": 4096, 00:07:12.887 "num_blocks": 256, 00:07:12.887 "uuid": "0f966047-35de-405f-a60a-fd371785137f", 00:07:12.887 "assigned_rate_limits": { 00:07:12.887 "rw_ios_per_sec": 0, 00:07:12.887 "rw_mbytes_per_sec": 0, 00:07:12.887 "r_mbytes_per_sec": 0, 00:07:12.887 "w_mbytes_per_sec": 0 00:07:12.887 }, 00:07:12.887 "claimed": false, 00:07:12.887 "zoned": false, 00:07:12.887 "supported_io_types": { 00:07:12.887 "read": true, 00:07:12.887 "write": true, 00:07:12.887 "unmap": true, 00:07:12.887 "flush": true, 00:07:12.887 "reset": true, 00:07:12.887 "nvme_admin": false, 00:07:12.887 "nvme_io": false, 00:07:12.887 "nvme_io_md": false, 00:07:12.887 "write_zeroes": true, 00:07:12.887 "zcopy": true, 00:07:12.887 "get_zone_info": false, 00:07:12.887 "zone_management": false, 00:07:12.887 "zone_append": false, 00:07:12.887 "compare": false, 00:07:12.887 "compare_and_write": false, 00:07:12.887 "abort": true, 00:07:12.887 "seek_hole": false, 00:07:12.887 "seek_data": false, 00:07:12.887 "copy": 
true, 00:07:12.887 "nvme_iov_md": false 00:07:12.887 }, 00:07:12.887 "memory_domains": [ 00:07:12.887 { 00:07:12.887 "dma_device_id": "system", 00:07:12.887 "dma_device_type": 1 00:07:12.887 }, 00:07:12.887 { 00:07:12.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.887 "dma_device_type": 2 00:07:12.887 } 00:07:12.888 ], 00:07:12.888 "driver_specific": {} 00:07:12.888 } 00:07:12.888 ]' 00:07:12.888 18:05:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:12.888 18:05:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:12.888 18:05:38 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:12.888 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.888 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:12.888 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.888 18:05:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:12.888 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.888 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:12.888 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.888 18:05:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:12.888 18:05:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:13.147 18:05:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:13.147 00:07:13.147 real 0m0.160s 00:07:13.147 user 0m0.104s 00:07:13.147 sys 0m0.017s 00:07:13.147 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.147 18:05:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:13.147 ************************************ 00:07:13.147 END TEST rpc_plugins 00:07:13.147 ************************************ 00:07:13.147 18:05:38 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:13.147 18:05:38 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.147 18:05:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.147 18:05:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.147 ************************************ 00:07:13.147 START TEST rpc_trace_cmd_test 00:07:13.147 ************************************ 00:07:13.147 18:05:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:13.147 18:05:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:13.147 18:05:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:13.147 18:05:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.147 18:05:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.147 18:05:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.147 18:05:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:13.147 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56895", 00:07:13.147 "tpoint_group_mask": "0x8", 00:07:13.147 "iscsi_conn": { 00:07:13.147 "mask": "0x2", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "scsi": { 00:07:13.147 "mask": "0x4", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "bdev": { 00:07:13.147 "mask": "0x8", 00:07:13.147 "tpoint_mask": "0xffffffffffffffff" 00:07:13.147 }, 00:07:13.147 "nvmf_rdma": { 00:07:13.147 "mask": "0x10", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "nvmf_tcp": { 00:07:13.147 "mask": "0x20", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "ftl": { 00:07:13.147 "mask": "0x40", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "blobfs": { 00:07:13.147 "mask": "0x80", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "dsa": { 00:07:13.147 "mask": "0x200", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "thread": { 00:07:13.147 "mask": "0x400", 00:07:13.147 
"tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "nvme_pcie": { 00:07:13.147 "mask": "0x800", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "iaa": { 00:07:13.147 "mask": "0x1000", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "nvme_tcp": { 00:07:13.147 "mask": "0x2000", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "bdev_nvme": { 00:07:13.147 "mask": "0x4000", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "sock": { 00:07:13.147 "mask": "0x8000", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "blob": { 00:07:13.147 "mask": "0x10000", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "bdev_raid": { 00:07:13.147 "mask": "0x20000", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 }, 00:07:13.147 "scheduler": { 00:07:13.147 "mask": "0x40000", 00:07:13.147 "tpoint_mask": "0x0" 00:07:13.147 } 00:07:13.147 }' 00:07:13.147 18:05:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:13.147 18:05:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:13.147 18:05:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:13.147 18:05:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:13.147 18:05:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:13.147 18:05:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:13.147 18:05:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:13.407 18:05:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:13.407 18:05:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:13.407 18:05:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:13.407 00:07:13.407 real 0m0.247s 00:07:13.407 user 0m0.216s 00:07:13.407 sys 0m0.020s 00:07:13.407 18:05:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:07:13.407 18:05:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.407 ************************************ 00:07:13.407 END TEST rpc_trace_cmd_test 00:07:13.407 ************************************ 00:07:13.407 18:05:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:13.407 18:05:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:13.407 18:05:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:13.407 18:05:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.407 18:05:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.407 18:05:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.407 ************************************ 00:07:13.407 START TEST rpc_daemon_integrity 00:07:13.407 ************************************ 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:13.407 { 00:07:13.407 "name": "Malloc2", 00:07:13.407 "aliases": [ 00:07:13.407 "c7e110ae-1c34-4361-b5e3-ed36c0126f4c" 00:07:13.407 ], 00:07:13.407 "product_name": "Malloc disk", 00:07:13.407 "block_size": 512, 00:07:13.407 "num_blocks": 16384, 00:07:13.407 "uuid": "c7e110ae-1c34-4361-b5e3-ed36c0126f4c", 00:07:13.407 "assigned_rate_limits": { 00:07:13.407 "rw_ios_per_sec": 0, 00:07:13.407 "rw_mbytes_per_sec": 0, 00:07:13.407 "r_mbytes_per_sec": 0, 00:07:13.407 "w_mbytes_per_sec": 0 00:07:13.407 }, 00:07:13.407 "claimed": false, 00:07:13.407 "zoned": false, 00:07:13.407 "supported_io_types": { 00:07:13.407 "read": true, 00:07:13.407 "write": true, 00:07:13.407 "unmap": true, 00:07:13.407 "flush": true, 00:07:13.407 "reset": true, 00:07:13.407 "nvme_admin": false, 00:07:13.407 "nvme_io": false, 00:07:13.407 "nvme_io_md": false, 00:07:13.407 "write_zeroes": true, 00:07:13.407 "zcopy": true, 00:07:13.407 "get_zone_info": false, 00:07:13.407 "zone_management": false, 00:07:13.407 "zone_append": false, 00:07:13.407 "compare": false, 00:07:13.407 "compare_and_write": false, 00:07:13.407 "abort": true, 00:07:13.407 "seek_hole": false, 00:07:13.407 "seek_data": false, 00:07:13.407 "copy": true, 00:07:13.407 "nvme_iov_md": false 00:07:13.407 }, 00:07:13.407 "memory_domains": [ 00:07:13.407 { 00:07:13.407 "dma_device_id": "system", 00:07:13.407 "dma_device_type": 1 00:07:13.407 }, 00:07:13.407 { 00:07:13.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.407 "dma_device_type": 2 00:07:13.407 } 
00:07:13.407 ], 00:07:13.407 "driver_specific": {} 00:07:13.407 } 00:07:13.407 ]' 00:07:13.407 18:05:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:13.666 18:05:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:13.666 18:05:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:13.666 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.666 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.666 [2024-12-06 18:05:38.959581] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:13.666 [2024-12-06 18:05:38.959677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:13.666 [2024-12-06 18:05:38.959732] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:13.666 [2024-12-06 18:05:38.959793] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:13.666 [2024-12-06 18:05:38.962814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:13.666 [2024-12-06 18:05:38.962881] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:13.666 Passthru0 00:07:13.666 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.666 18:05:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:13.666 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.666 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.666 18:05:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.666 18:05:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:13.666 { 00:07:13.666 "name": "Malloc2", 00:07:13.666 "aliases": [ 00:07:13.666 "c7e110ae-1c34-4361-b5e3-ed36c0126f4c" 
00:07:13.666 ], 00:07:13.666 "product_name": "Malloc disk", 00:07:13.666 "block_size": 512, 00:07:13.666 "num_blocks": 16384, 00:07:13.666 "uuid": "c7e110ae-1c34-4361-b5e3-ed36c0126f4c", 00:07:13.666 "assigned_rate_limits": { 00:07:13.666 "rw_ios_per_sec": 0, 00:07:13.666 "rw_mbytes_per_sec": 0, 00:07:13.666 "r_mbytes_per_sec": 0, 00:07:13.666 "w_mbytes_per_sec": 0 00:07:13.666 }, 00:07:13.666 "claimed": true, 00:07:13.666 "claim_type": "exclusive_write", 00:07:13.666 "zoned": false, 00:07:13.666 "supported_io_types": { 00:07:13.666 "read": true, 00:07:13.666 "write": true, 00:07:13.666 "unmap": true, 00:07:13.666 "flush": true, 00:07:13.666 "reset": true, 00:07:13.666 "nvme_admin": false, 00:07:13.666 "nvme_io": false, 00:07:13.666 "nvme_io_md": false, 00:07:13.666 "write_zeroes": true, 00:07:13.666 "zcopy": true, 00:07:13.666 "get_zone_info": false, 00:07:13.666 "zone_management": false, 00:07:13.666 "zone_append": false, 00:07:13.666 "compare": false, 00:07:13.666 "compare_and_write": false, 00:07:13.666 "abort": true, 00:07:13.666 "seek_hole": false, 00:07:13.666 "seek_data": false, 00:07:13.666 "copy": true, 00:07:13.666 "nvme_iov_md": false 00:07:13.666 }, 00:07:13.666 "memory_domains": [ 00:07:13.666 { 00:07:13.666 "dma_device_id": "system", 00:07:13.666 "dma_device_type": 1 00:07:13.666 }, 00:07:13.666 { 00:07:13.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.666 "dma_device_type": 2 00:07:13.666 } 00:07:13.666 ], 00:07:13.666 "driver_specific": {} 00:07:13.666 }, 00:07:13.666 { 00:07:13.666 "name": "Passthru0", 00:07:13.666 "aliases": [ 00:07:13.666 "f27d446e-c57e-55ee-8ca4-557b69392190" 00:07:13.666 ], 00:07:13.666 "product_name": "passthru", 00:07:13.666 "block_size": 512, 00:07:13.666 "num_blocks": 16384, 00:07:13.666 "uuid": "f27d446e-c57e-55ee-8ca4-557b69392190", 00:07:13.666 "assigned_rate_limits": { 00:07:13.666 "rw_ios_per_sec": 0, 00:07:13.666 "rw_mbytes_per_sec": 0, 00:07:13.666 "r_mbytes_per_sec": 0, 00:07:13.666 "w_mbytes_per_sec": 0 
00:07:13.666 }, 00:07:13.666 "claimed": false, 00:07:13.666 "zoned": false, 00:07:13.666 "supported_io_types": { 00:07:13.666 "read": true, 00:07:13.666 "write": true, 00:07:13.666 "unmap": true, 00:07:13.666 "flush": true, 00:07:13.666 "reset": true, 00:07:13.666 "nvme_admin": false, 00:07:13.666 "nvme_io": false, 00:07:13.666 "nvme_io_md": false, 00:07:13.666 "write_zeroes": true, 00:07:13.666 "zcopy": true, 00:07:13.666 "get_zone_info": false, 00:07:13.666 "zone_management": false, 00:07:13.666 "zone_append": false, 00:07:13.666 "compare": false, 00:07:13.666 "compare_and_write": false, 00:07:13.666 "abort": true, 00:07:13.666 "seek_hole": false, 00:07:13.666 "seek_data": false, 00:07:13.666 "copy": true, 00:07:13.666 "nvme_iov_md": false 00:07:13.666 }, 00:07:13.666 "memory_domains": [ 00:07:13.666 { 00:07:13.666 "dma_device_id": "system", 00:07:13.666 "dma_device_type": 1 00:07:13.666 }, 00:07:13.666 { 00:07:13.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.666 "dma_device_type": 2 00:07:13.666 } 00:07:13.666 ], 00:07:13.666 "driver_specific": { 00:07:13.666 "passthru": { 00:07:13.666 "name": "Passthru0", 00:07:13.666 "base_bdev_name": "Malloc2" 00:07:13.666 } 00:07:13.666 } 00:07:13.666 } 00:07:13.666 ]' 00:07:13.666 18:05:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:13.667 00:07:13.667 real 0m0.345s 00:07:13.667 user 0m0.210s 00:07:13.667 sys 0m0.039s 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.667 18:05:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:13.667 ************************************ 00:07:13.667 END TEST rpc_daemon_integrity 00:07:13.667 ************************************ 00:07:13.667 18:05:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:13.667 18:05:39 rpc -- rpc/rpc.sh@84 -- # killprocess 56895 00:07:13.667 18:05:39 rpc -- common/autotest_common.sh@954 -- # '[' -z 56895 ']' 00:07:13.667 18:05:39 rpc -- common/autotest_common.sh@958 -- # kill -0 56895 00:07:13.667 18:05:39 rpc -- common/autotest_common.sh@959 -- # uname 00:07:13.926 18:05:39 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.926 18:05:39 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56895 00:07:13.926 18:05:39 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.926 18:05:39 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.926 
killing process with pid 56895 00:07:13.926 18:05:39 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56895' 00:07:13.926 18:05:39 rpc -- common/autotest_common.sh@973 -- # kill 56895 00:07:13.926 18:05:39 rpc -- common/autotest_common.sh@978 -- # wait 56895 00:07:16.462 00:07:16.462 real 0m5.177s 00:07:16.462 user 0m5.739s 00:07:16.462 sys 0m0.932s 00:07:16.462 18:05:41 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.462 18:05:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.462 ************************************ 00:07:16.462 END TEST rpc 00:07:16.462 ************************************ 00:07:16.462 18:05:41 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:16.462 18:05:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.462 18:05:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.462 18:05:41 -- common/autotest_common.sh@10 -- # set +x 00:07:16.462 ************************************ 00:07:16.462 START TEST skip_rpc 00:07:16.462 ************************************ 00:07:16.462 18:05:41 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:16.462 * Looking for test storage... 
00:07:16.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:16.462 18:05:41 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:16.462 18:05:41 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:16.462 18:05:41 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:16.462 18:05:41 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.462 18:05:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:16.462 18:05:41 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.462 18:05:41 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:16.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.462 --rc genhtml_branch_coverage=1 00:07:16.462 --rc genhtml_function_coverage=1 00:07:16.462 --rc genhtml_legend=1 00:07:16.462 --rc geninfo_all_blocks=1 00:07:16.462 --rc geninfo_unexecuted_blocks=1 00:07:16.462 00:07:16.462 ' 00:07:16.462 18:05:41 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:16.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.462 --rc genhtml_branch_coverage=1 00:07:16.462 --rc genhtml_function_coverage=1 00:07:16.462 --rc genhtml_legend=1 00:07:16.462 --rc geninfo_all_blocks=1 00:07:16.462 --rc geninfo_unexecuted_blocks=1 00:07:16.462 00:07:16.462 ' 00:07:16.462 18:05:41 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:07:16.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.462 --rc genhtml_branch_coverage=1 00:07:16.462 --rc genhtml_function_coverage=1 00:07:16.462 --rc genhtml_legend=1 00:07:16.462 --rc geninfo_all_blocks=1 00:07:16.462 --rc geninfo_unexecuted_blocks=1 00:07:16.462 00:07:16.462 ' 00:07:16.462 18:05:41 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:16.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.462 --rc genhtml_branch_coverage=1 00:07:16.462 --rc genhtml_function_coverage=1 00:07:16.462 --rc genhtml_legend=1 00:07:16.462 --rc geninfo_all_blocks=1 00:07:16.462 --rc geninfo_unexecuted_blocks=1 00:07:16.462 00:07:16.462 ' 00:07:16.462 18:05:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:16.462 18:05:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:16.462 18:05:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:16.462 18:05:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.462 18:05:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.462 18:05:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.462 ************************************ 00:07:16.462 START TEST skip_rpc 00:07:16.462 ************************************ 00:07:16.462 18:05:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:16.462 18:05:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57130 00:07:16.462 18:05:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:16.462 18:05:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:16.462 18:05:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:16.462 [2024-12-06 18:05:41.882981] Starting SPDK v25.01-pre 
git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:07:16.462 [2024-12-06 18:05:41.883183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57130 ] 00:07:16.722 [2024-12-06 18:05:42.068125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.722 [2024-12-06 18:05:42.200536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.993 18:05:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:21.993 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:21.993 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:21.993 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:21.993 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57130 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57130 ']' 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57130 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57130 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.994 killing process with pid 57130 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57130' 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57130 00:07:21.994 18:05:46 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57130 00:07:23.901 00:07:23.901 real 0m7.278s 00:07:23.901 user 0m6.719s 00:07:23.901 sys 0m0.452s 00:07:23.901 18:05:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.901 18:05:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.901 ************************************ 00:07:23.901 END TEST skip_rpc 00:07:23.901 ************************************ 00:07:23.901 18:05:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:23.901 18:05:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.901 18:05:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.901 18:05:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.901 
************************************ 00:07:23.901 START TEST skip_rpc_with_json 00:07:23.901 ************************************ 00:07:23.901 18:05:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:23.901 18:05:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:23.901 18:05:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57234 00:07:23.901 18:05:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:23.901 18:05:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57234 00:07:23.901 18:05:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57234 ']' 00:07:23.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.901 18:05:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.901 18:05:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:23.901 18:05:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.901 18:05:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.901 18:05:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.901 18:05:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:23.901 [2024-12-06 18:05:49.218249] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:07:23.901 [2024-12-06 18:05:49.218822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57234 ] 00:07:23.901 [2024-12-06 18:05:49.407343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.160 [2024-12-06 18:05:49.535262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:25.183 [2024-12-06 18:05:50.401693] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:25.183 request: 00:07:25.183 { 00:07:25.183 "trtype": "tcp", 00:07:25.183 "method": "nvmf_get_transports", 00:07:25.183 "req_id": 1 00:07:25.183 } 00:07:25.183 Got JSON-RPC error response 00:07:25.183 response: 00:07:25.183 { 00:07:25.183 "code": -19, 00:07:25.183 "message": "No such device" 00:07:25.183 } 00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:25.183 [2024-12-06 18:05:50.413888] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.183 18:05:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:25.183 { 00:07:25.183 "subsystems": [ 00:07:25.183 { 00:07:25.183 "subsystem": "fsdev", 00:07:25.183 "config": [ 00:07:25.183 { 00:07:25.183 "method": "fsdev_set_opts", 00:07:25.183 "params": { 00:07:25.183 "fsdev_io_pool_size": 65535, 00:07:25.183 "fsdev_io_cache_size": 256 00:07:25.183 } 00:07:25.183 } 00:07:25.183 ] 00:07:25.183 }, 00:07:25.183 { 00:07:25.183 "subsystem": "keyring", 00:07:25.183 "config": [] 00:07:25.183 }, 00:07:25.183 { 00:07:25.183 "subsystem": "iobuf", 00:07:25.183 "config": [ 00:07:25.183 { 00:07:25.183 "method": "iobuf_set_options", 00:07:25.183 "params": { 00:07:25.183 "small_pool_count": 8192, 00:07:25.183 "large_pool_count": 1024, 00:07:25.183 "small_bufsize": 8192, 00:07:25.183 "large_bufsize": 135168, 00:07:25.183 "enable_numa": false 00:07:25.183 } 00:07:25.183 } 00:07:25.183 ] 00:07:25.183 }, 00:07:25.183 { 00:07:25.183 "subsystem": "sock", 00:07:25.183 "config": [ 00:07:25.183 { 00:07:25.183 "method": "sock_set_default_impl", 00:07:25.183 "params": { 00:07:25.183 "impl_name": "posix" 00:07:25.183 } 00:07:25.183 }, 00:07:25.183 { 00:07:25.183 "method": "sock_impl_set_options", 00:07:25.183 "params": { 00:07:25.183 "impl_name": "ssl", 00:07:25.183 "recv_buf_size": 4096, 00:07:25.183 "send_buf_size": 4096, 00:07:25.183 "enable_recv_pipe": true, 00:07:25.183 "enable_quickack": false, 00:07:25.183 
"enable_placement_id": 0, 00:07:25.183 "enable_zerocopy_send_server": true, 00:07:25.183 "enable_zerocopy_send_client": false, 00:07:25.183 "zerocopy_threshold": 0, 00:07:25.183 "tls_version": 0, 00:07:25.183 "enable_ktls": false 00:07:25.183 } 00:07:25.183 }, 00:07:25.183 { 00:07:25.183 "method": "sock_impl_set_options", 00:07:25.183 "params": { 00:07:25.183 "impl_name": "posix", 00:07:25.183 "recv_buf_size": 2097152, 00:07:25.183 "send_buf_size": 2097152, 00:07:25.183 "enable_recv_pipe": true, 00:07:25.183 "enable_quickack": false, 00:07:25.183 "enable_placement_id": 0, 00:07:25.183 "enable_zerocopy_send_server": true, 00:07:25.183 "enable_zerocopy_send_client": false, 00:07:25.183 "zerocopy_threshold": 0, 00:07:25.183 "tls_version": 0, 00:07:25.183 "enable_ktls": false 00:07:25.183 } 00:07:25.183 } 00:07:25.183 ] 00:07:25.183 }, 00:07:25.183 { 00:07:25.183 "subsystem": "vmd", 00:07:25.183 "config": [] 00:07:25.183 }, 00:07:25.183 { 00:07:25.183 "subsystem": "accel", 00:07:25.183 "config": [ 00:07:25.183 { 00:07:25.183 "method": "accel_set_options", 00:07:25.183 "params": { 00:07:25.183 "small_cache_size": 128, 00:07:25.183 "large_cache_size": 16, 00:07:25.183 "task_count": 2048, 00:07:25.183 "sequence_count": 2048, 00:07:25.183 "buf_count": 2048 00:07:25.183 } 00:07:25.183 } 00:07:25.183 ] 00:07:25.183 }, 00:07:25.183 { 00:07:25.183 "subsystem": "bdev", 00:07:25.183 "config": [ 00:07:25.183 { 00:07:25.183 "method": "bdev_set_options", 00:07:25.183 "params": { 00:07:25.183 "bdev_io_pool_size": 65535, 00:07:25.183 "bdev_io_cache_size": 256, 00:07:25.183 "bdev_auto_examine": true, 00:07:25.183 "iobuf_small_cache_size": 128, 00:07:25.183 "iobuf_large_cache_size": 16 00:07:25.183 } 00:07:25.183 }, 00:07:25.183 { 00:07:25.183 "method": "bdev_raid_set_options", 00:07:25.183 "params": { 00:07:25.183 "process_window_size_kb": 1024, 00:07:25.183 "process_max_bandwidth_mb_sec": 0 00:07:25.184 } 00:07:25.184 }, 00:07:25.184 { 00:07:25.184 "method": "bdev_iscsi_set_options", 
00:07:25.184 "params": { 00:07:25.184 "timeout_sec": 30 00:07:25.184 } 00:07:25.184 }, 00:07:25.184 { 00:07:25.184 "method": "bdev_nvme_set_options", 00:07:25.184 "params": { 00:07:25.184 "action_on_timeout": "none", 00:07:25.184 "timeout_us": 0, 00:07:25.184 "timeout_admin_us": 0, 00:07:25.184 "keep_alive_timeout_ms": 10000, 00:07:25.184 "arbitration_burst": 0, 00:07:25.184 "low_priority_weight": 0, 00:07:25.184 "medium_priority_weight": 0, 00:07:25.184 "high_priority_weight": 0, 00:07:25.184 "nvme_adminq_poll_period_us": 10000, 00:07:25.184 "nvme_ioq_poll_period_us": 0, 00:07:25.184 "io_queue_requests": 0, 00:07:25.184 "delay_cmd_submit": true, 00:07:25.184 "transport_retry_count": 4, 00:07:25.184 "bdev_retry_count": 3, 00:07:25.184 "transport_ack_timeout": 0, 00:07:25.184 "ctrlr_loss_timeout_sec": 0, 00:07:25.184 "reconnect_delay_sec": 0, 00:07:25.184 "fast_io_fail_timeout_sec": 0, 00:07:25.184 "disable_auto_failback": false, 00:07:25.184 "generate_uuids": false, 00:07:25.184 "transport_tos": 0, 00:07:25.184 "nvme_error_stat": false, 00:07:25.184 "rdma_srq_size": 0, 00:07:25.184 "io_path_stat": false, 00:07:25.184 "allow_accel_sequence": false, 00:07:25.184 "rdma_max_cq_size": 0, 00:07:25.184 "rdma_cm_event_timeout_ms": 0, 00:07:25.184 "dhchap_digests": [ 00:07:25.184 "sha256", 00:07:25.184 "sha384", 00:07:25.184 "sha512" 00:07:25.184 ], 00:07:25.184 "dhchap_dhgroups": [ 00:07:25.184 "null", 00:07:25.184 "ffdhe2048", 00:07:25.184 "ffdhe3072", 00:07:25.184 "ffdhe4096", 00:07:25.184 "ffdhe6144", 00:07:25.184 "ffdhe8192" 00:07:25.184 ] 00:07:25.184 } 00:07:25.184 }, 00:07:25.184 { 00:07:25.184 "method": "bdev_nvme_set_hotplug", 00:07:25.184 "params": { 00:07:25.184 "period_us": 100000, 00:07:25.184 "enable": false 00:07:25.184 } 00:07:25.184 }, 00:07:25.184 { 00:07:25.184 "method": "bdev_wait_for_examine" 00:07:25.184 } 00:07:25.184 ] 00:07:25.184 }, 00:07:25.184 { 00:07:25.184 "subsystem": "scsi", 00:07:25.184 "config": null 00:07:25.184 }, 00:07:25.184 { 
00:07:25.184 "subsystem": "scheduler", 00:07:25.184 "config": [ 00:07:25.184 { 00:07:25.184 "method": "framework_set_scheduler", 00:07:25.184 "params": { 00:07:25.184 "name": "static" 00:07:25.184 } 00:07:25.184 } 00:07:25.184 ] 00:07:25.184 }, 00:07:25.184 { 00:07:25.184 "subsystem": "vhost_scsi", 00:07:25.184 "config": [] 00:07:25.184 }, 00:07:25.184 { 00:07:25.184 "subsystem": "vhost_blk", 00:07:25.184 "config": [] 00:07:25.184 }, 00:07:25.184 { 00:07:25.184 "subsystem": "ublk", 00:07:25.184 "config": [] 00:07:25.184 }, 00:07:25.184 { 00:07:25.184 "subsystem": "nbd", 00:07:25.184 "config": [] 00:07:25.184 }, 00:07:25.184 { 00:07:25.184 "subsystem": "nvmf", 00:07:25.184 "config": [ 00:07:25.184 { 00:07:25.184 "method": "nvmf_set_config", 00:07:25.184 "params": { 00:07:25.184 "discovery_filter": "match_any", 00:07:25.184 "admin_cmd_passthru": { 00:07:25.184 "identify_ctrlr": false 00:07:25.184 }, 00:07:25.184 "dhchap_digests": [ 00:07:25.184 "sha256", 00:07:25.184 "sha384", 00:07:25.184 "sha512" 00:07:25.184 ], 00:07:25.184 "dhchap_dhgroups": [ 00:07:25.184 "null", 00:07:25.184 "ffdhe2048", 00:07:25.184 "ffdhe3072", 00:07:25.184 "ffdhe4096", 00:07:25.184 "ffdhe6144", 00:07:25.184 "ffdhe8192" 00:07:25.184 ] 00:07:25.184 } 00:07:25.184 }, 00:07:25.184 { 00:07:25.184 "method": "nvmf_set_max_subsystems", 00:07:25.184 "params": { 00:07:25.184 "max_subsystems": 1024 00:07:25.184 } 00:07:25.184 }, 00:07:25.184 { 00:07:25.184 "method": "nvmf_set_crdt", 00:07:25.184 "params": { 00:07:25.184 "crdt1": 0, 00:07:25.184 "crdt2": 0, 00:07:25.184 "crdt3": 0 00:07:25.184 } 00:07:25.184 }, 00:07:25.184 { 00:07:25.184 "method": "nvmf_create_transport", 00:07:25.184 "params": { 00:07:25.184 "trtype": "TCP", 00:07:25.184 "max_queue_depth": 128, 00:07:25.184 "max_io_qpairs_per_ctrlr": 127, 00:07:25.184 "in_capsule_data_size": 4096, 00:07:25.184 "max_io_size": 131072, 00:07:25.184 "io_unit_size": 131072, 00:07:25.184 "max_aq_depth": 128, 00:07:25.184 "num_shared_buffers": 511, 
00:07:25.184 "buf_cache_size": 4294967295, 00:07:25.184 "dif_insert_or_strip": false, 00:07:25.184 "zcopy": false, 00:07:25.184 "c2h_success": true, 00:07:25.184 "sock_priority": 0, 00:07:25.184 "abort_timeout_sec": 1, 00:07:25.184 "ack_timeout": 0, 00:07:25.184 "data_wr_pool_size": 0 00:07:25.184 } 00:07:25.184 } 00:07:25.184 ] 00:07:25.184 }, 00:07:25.184 { 00:07:25.184 "subsystem": "iscsi", 00:07:25.184 "config": [ 00:07:25.184 { 00:07:25.184 "method": "iscsi_set_options", 00:07:25.184 "params": { 00:07:25.184 "node_base": "iqn.2016-06.io.spdk", 00:07:25.184 "max_sessions": 128, 00:07:25.184 "max_connections_per_session": 2, 00:07:25.184 "max_queue_depth": 64, 00:07:25.184 "default_time2wait": 2, 00:07:25.184 "default_time2retain": 20, 00:07:25.184 "first_burst_length": 8192, 00:07:25.184 "immediate_data": true, 00:07:25.184 "allow_duplicated_isid": false, 00:07:25.184 "error_recovery_level": 0, 00:07:25.184 "nop_timeout": 60, 00:07:25.184 "nop_in_interval": 30, 00:07:25.184 "disable_chap": false, 00:07:25.184 "require_chap": false, 00:07:25.184 "mutual_chap": false, 00:07:25.184 "chap_group": 0, 00:07:25.184 "max_large_datain_per_connection": 64, 00:07:25.184 "max_r2t_per_connection": 4, 00:07:25.184 "pdu_pool_size": 36864, 00:07:25.184 "immediate_data_pool_size": 16384, 00:07:25.184 "data_out_pool_size": 2048 00:07:25.184 } 00:07:25.184 } 00:07:25.184 ] 00:07:25.184 } 00:07:25.184 ] 00:07:25.184 } 00:07:25.184 18:05:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:25.184 18:05:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57234 00:07:25.184 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57234 ']' 00:07:25.184 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57234 00:07:25.184 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:25.184 18:05:50 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.184 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57234 00:07:25.184 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.184 killing process with pid 57234 00:07:25.184 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.184 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57234' 00:07:25.184 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57234 00:07:25.184 18:05:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57234 00:07:27.719 18:05:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57284 00:07:27.719 18:05:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:27.719 18:05:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:33.000 18:05:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57284 00:07:33.000 18:05:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57284 ']' 00:07:33.000 18:05:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57284 00:07:33.000 18:05:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:33.000 18:05:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.000 18:05:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57284 00:07:33.000 killing process with pid 57284 00:07:33.000 18:05:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.000 18:05:57 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.000 18:05:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57284' 00:07:33.000 18:05:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57284 00:07:33.000 18:05:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57284 00:07:34.939 18:06:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:34.939 18:06:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:34.939 ************************************ 00:07:34.939 END TEST skip_rpc_with_json 00:07:34.939 ************************************ 00:07:34.939 00:07:34.939 real 0m11.067s 00:07:34.939 user 0m10.411s 00:07:34.939 sys 0m0.991s 00:07:34.939 18:06:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.939 18:06:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:34.939 18:06:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:34.939 18:06:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.939 18:06:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.939 18:06:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.939 ************************************ 00:07:34.939 START TEST skip_rpc_with_delay 00:07:34.939 ************************************ 00:07:34.939 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:34.939 18:06:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:34.939 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:34.939 
18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:34.940 [2024-12-06 18:06:00.338707] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:34.940 ************************************ 00:07:34.940 END TEST skip_rpc_with_delay 00:07:34.940 ************************************ 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.940 00:07:34.940 real 0m0.222s 00:07:34.940 user 0m0.115s 00:07:34.940 sys 0m0.103s 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.940 18:06:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:34.940 18:06:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:35.198 18:06:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:35.198 18:06:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:35.198 18:06:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.198 18:06:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.198 18:06:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.198 ************************************ 00:07:35.198 START TEST exit_on_failed_rpc_init 00:07:35.198 ************************************ 00:07:35.198 18:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:35.198 18:06:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57418 00:07:35.198 18:06:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57418 00:07:35.198 18:06:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:35.198 18:06:00 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57418 ']' 00:07:35.198 18:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.198 18:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.198 18:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.198 18:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.198 18:06:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:35.198 [2024-12-06 18:06:00.593959] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:07:35.198 [2024-12-06 18:06:00.594140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57418 ] 00:07:35.457 [2024-12-06 18:06:00.770748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.457 [2024-12-06 18:06:00.906241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.393 18:06:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.393 18:06:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:36.393 18:06:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:36.393 18:06:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:36.393 18:06:01 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:36.393 18:06:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:36.393 18:06:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:36.393 18:06:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.393 18:06:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:36.393 18:06:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.393 18:06:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:36.393 18:06:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.393 18:06:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:36.393 18:06:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:36.393 18:06:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:36.652 [2024-12-06 18:06:01.946085] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:07:36.652 [2024-12-06 18:06:01.946598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57436 ] 00:07:36.652 [2024-12-06 18:06:02.133593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.912 [2024-12-06 18:06:02.310950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.912 [2024-12-06 18:06:02.311109] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:36.912 [2024-12-06 18:06:02.311157] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:36.912 [2024-12-06 18:06:02.311178] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57418 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57418 ']' 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57418 00:07:37.171 18:06:02 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57418 00:07:37.171 killing process with pid 57418 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57418' 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57418 00:07:37.171 18:06:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57418 00:07:39.703 00:07:39.703 real 0m4.501s 00:07:39.703 user 0m5.040s 00:07:39.703 sys 0m0.719s 00:07:39.703 18:06:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.703 ************************************ 00:07:39.703 END TEST exit_on_failed_rpc_init 00:07:39.703 ************************************ 00:07:39.703 18:06:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:39.703 18:06:05 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:39.703 00:07:39.704 real 0m23.462s 00:07:39.704 user 0m22.451s 00:07:39.704 sys 0m2.485s 00:07:39.704 18:06:05 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.704 ************************************ 00:07:39.704 END TEST skip_rpc 00:07:39.704 ************************************ 00:07:39.704 18:06:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.704 18:06:05 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:39.704 18:06:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.704 18:06:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.704 18:06:05 -- common/autotest_common.sh@10 -- # set +x 00:07:39.704 ************************************ 00:07:39.704 START TEST rpc_client 00:07:39.704 ************************************ 00:07:39.704 18:06:05 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:39.704 * Looking for test storage... 00:07:39.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:39.704 18:06:05 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:39.704 18:06:05 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:07:39.704 18:06:05 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:39.963 18:06:05 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@345 
-- # : 1 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.963 18:06:05 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:39.963 18:06:05 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.963 18:06:05 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:39.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.963 --rc genhtml_branch_coverage=1 00:07:39.963 --rc genhtml_function_coverage=1 00:07:39.963 --rc genhtml_legend=1 00:07:39.963 --rc geninfo_all_blocks=1 00:07:39.963 --rc geninfo_unexecuted_blocks=1 00:07:39.963 00:07:39.963 ' 00:07:39.963 18:06:05 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:39.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.963 --rc genhtml_branch_coverage=1 00:07:39.963 --rc genhtml_function_coverage=1 00:07:39.963 --rc 
genhtml_legend=1 00:07:39.963 --rc geninfo_all_blocks=1 00:07:39.963 --rc geninfo_unexecuted_blocks=1 00:07:39.963 00:07:39.963 ' 00:07:39.963 18:06:05 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:39.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.963 --rc genhtml_branch_coverage=1 00:07:39.963 --rc genhtml_function_coverage=1 00:07:39.963 --rc genhtml_legend=1 00:07:39.963 --rc geninfo_all_blocks=1 00:07:39.963 --rc geninfo_unexecuted_blocks=1 00:07:39.963 00:07:39.963 ' 00:07:39.963 18:06:05 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:39.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.963 --rc genhtml_branch_coverage=1 00:07:39.963 --rc genhtml_function_coverage=1 00:07:39.963 --rc genhtml_legend=1 00:07:39.963 --rc geninfo_all_blocks=1 00:07:39.963 --rc geninfo_unexecuted_blocks=1 00:07:39.963 00:07:39.963 ' 00:07:39.963 18:06:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:39.963 OK 00:07:39.963 18:06:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:39.963 00:07:39.963 real 0m0.233s 00:07:39.963 user 0m0.127s 00:07:39.963 sys 0m0.111s 00:07:39.963 18:06:05 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.963 ************************************ 00:07:39.963 END TEST rpc_client 00:07:39.963 ************************************ 00:07:39.963 18:06:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:39.963 18:06:05 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:39.963 18:06:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.963 18:06:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.963 18:06:05 -- common/autotest_common.sh@10 -- # set +x 00:07:39.963 ************************************ 00:07:39.963 START TEST json_config 
00:07:39.963 ************************************ 00:07:39.963 18:06:05 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:39.963 18:06:05 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:39.963 18:06:05 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:07:39.963 18:06:05 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:39.963 18:06:05 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:39.963 18:06:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.963 18:06:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.963 18:06:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.963 18:06:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.963 18:06:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.963 18:06:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.963 18:06:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.963 18:06:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.963 18:06:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.963 18:06:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.963 18:06:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.963 18:06:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:39.963 18:06:05 json_config -- scripts/common.sh@345 -- # : 1 00:07:39.963 18:06:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.963 18:06:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.963 18:06:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:40.223 18:06:05 json_config -- scripts/common.sh@353 -- # local d=1 00:07:40.223 18:06:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.223 18:06:05 json_config -- scripts/common.sh@355 -- # echo 1 00:07:40.223 18:06:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.223 18:06:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:40.223 18:06:05 json_config -- scripts/common.sh@353 -- # local d=2 00:07:40.223 18:06:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.223 18:06:05 json_config -- scripts/common.sh@355 -- # echo 2 00:07:40.223 18:06:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.223 18:06:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.223 18:06:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.223 18:06:05 json_config -- scripts/common.sh@368 -- # return 0 00:07:40.223 18:06:05 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.223 18:06:05 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:40.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.223 --rc genhtml_branch_coverage=1 00:07:40.223 --rc genhtml_function_coverage=1 00:07:40.223 --rc genhtml_legend=1 00:07:40.223 --rc geninfo_all_blocks=1 00:07:40.223 --rc geninfo_unexecuted_blocks=1 00:07:40.223 00:07:40.223 ' 00:07:40.223 18:06:05 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:40.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.223 --rc genhtml_branch_coverage=1 00:07:40.223 --rc genhtml_function_coverage=1 00:07:40.223 --rc genhtml_legend=1 00:07:40.223 --rc geninfo_all_blocks=1 00:07:40.223 --rc geninfo_unexecuted_blocks=1 00:07:40.223 00:07:40.223 ' 00:07:40.223 18:06:05 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:40.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.223 --rc genhtml_branch_coverage=1 00:07:40.223 --rc genhtml_function_coverage=1 00:07:40.223 --rc genhtml_legend=1 00:07:40.223 --rc geninfo_all_blocks=1 00:07:40.223 --rc geninfo_unexecuted_blocks=1 00:07:40.223 00:07:40.223 ' 00:07:40.223 18:06:05 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:40.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.223 --rc genhtml_branch_coverage=1 00:07:40.223 --rc genhtml_function_coverage=1 00:07:40.223 --rc genhtml_legend=1 00:07:40.223 --rc geninfo_all_blocks=1 00:07:40.223 --rc geninfo_unexecuted_blocks=1 00:07:40.223 00:07:40.223 ' 00:07:40.224 18:06:05 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e80ab19-9c15-4076-89d4-bbd3dd84ce33 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=2e80ab19-9c15-4076-89d4-bbd3dd84ce33 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.224 18:06:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.224 18:06:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.224 18:06:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.224 18:06:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.224 18:06:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.224 18:06:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.224 18:06:05 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.224 18:06:05 json_config -- paths/export.sh@5 -- # export PATH 00:07:40.224 18:06:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@51 -- # : 0 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.224 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.224 18:06:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.224 18:06:05 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:07:40.224 18:06:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:40.224 18:06:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:40.224 18:06:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:40.224 18:06:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:40.224 WARNING: No tests are enabled so not running JSON configuration tests 00:07:40.224 18:06:05 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:40.224 18:06:05 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:40.224 00:07:40.224 real 0m0.167s 00:07:40.224 user 0m0.107s 00:07:40.224 sys 0m0.066s 00:07:40.224 18:06:05 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.224 18:06:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:40.224 ************************************ 00:07:40.224 END TEST json_config 00:07:40.224 ************************************ 00:07:40.224 18:06:05 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:40.224 18:06:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.224 18:06:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.224 18:06:05 -- common/autotest_common.sh@10 -- # set +x 00:07:40.224 ************************************ 00:07:40.224 START TEST json_config_extra_key 00:07:40.224 ************************************ 00:07:40.224 18:06:05 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:40.224 18:06:05 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:40.224 18:06:05 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:07:40.224 18:06:05 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:40.224 18:06:05 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.224 18:06:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:40.224 18:06:05 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.224 18:06:05 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:40.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.224 --rc genhtml_branch_coverage=1 00:07:40.224 --rc genhtml_function_coverage=1 00:07:40.224 --rc genhtml_legend=1 00:07:40.224 --rc geninfo_all_blocks=1 00:07:40.224 --rc geninfo_unexecuted_blocks=1 00:07:40.224 00:07:40.224 ' 00:07:40.224 18:06:05 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:40.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.224 --rc genhtml_branch_coverage=1 00:07:40.224 --rc genhtml_function_coverage=1 00:07:40.224 --rc 
genhtml_legend=1 00:07:40.224 --rc geninfo_all_blocks=1 00:07:40.224 --rc geninfo_unexecuted_blocks=1 00:07:40.224 00:07:40.224 ' 00:07:40.224 18:06:05 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:40.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.224 --rc genhtml_branch_coverage=1 00:07:40.224 --rc genhtml_function_coverage=1 00:07:40.224 --rc genhtml_legend=1 00:07:40.224 --rc geninfo_all_blocks=1 00:07:40.224 --rc geninfo_unexecuted_blocks=1 00:07:40.224 00:07:40.224 ' 00:07:40.224 18:06:05 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:40.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.224 --rc genhtml_branch_coverage=1 00:07:40.224 --rc genhtml_function_coverage=1 00:07:40.224 --rc genhtml_legend=1 00:07:40.224 --rc geninfo_all_blocks=1 00:07:40.224 --rc geninfo_unexecuted_blocks=1 00:07:40.224 00:07:40.224 ' 00:07:40.224 18:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:40.224 18:06:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:40.224 18:06:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.224 18:06:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.224 18:06:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.225 18:06:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.225 18:06:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.225 18:06:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.225 18:06:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.225 18:06:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.225 18:06:05 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.225 18:06:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e80ab19-9c15-4076-89d4-bbd3dd84ce33 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2e80ab19-9c15-4076-89d4-bbd3dd84ce33 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.484 18:06:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.484 18:06:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.484 18:06:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.484 18:06:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.484 18:06:05 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.484 18:06:05 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.484 18:06:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.484 18:06:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:40.484 18:06:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.484 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.484 18:06:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.484 18:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:40.484 18:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:40.484 18:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:40.484 18:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:40.484 18:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:40.484 18:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:40.484 18:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:40.484 18:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:40.484 18:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:40.484 18:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:40.484 18:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:40.484 INFO: launching applications... 
00:07:40.484 18:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:40.484 18:06:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:40.484 18:06:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:40.484 18:06:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:40.484 18:06:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:40.484 18:06:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:40.484 18:06:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:40.484 18:06:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:40.484 18:06:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57646 00:07:40.484 18:06:05 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:40.484 Waiting for target to run... 00:07:40.484 18:06:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:40.484 18:06:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57646 /var/tmp/spdk_tgt.sock 00:07:40.484 18:06:05 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57646 ']' 00:07:40.484 18:06:05 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:40.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:40.484 18:06:05 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.484 18:06:05 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:40.484 18:06:05 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.484 18:06:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:40.484 [2024-12-06 18:06:05.874438] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:07:40.484 [2024-12-06 18:06:05.874631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57646 ] 00:07:41.052 [2024-12-06 18:06:06.354825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.052 [2024-12-06 18:06:06.470459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.988 18:06:07 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.988 18:06:07 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:41.988 00:07:41.988 18:06:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:41.988 INFO: shutting down applications... 00:07:41.988 18:06:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:07:41.988 18:06:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:41.988 18:06:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:41.988 18:06:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:41.988 18:06:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57646 ]] 00:07:41.988 18:06:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57646 00:07:41.988 18:06:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:41.988 18:06:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:41.988 18:06:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57646 00:07:41.988 18:06:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:42.247 18:06:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:42.247 18:06:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:42.247 18:06:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57646 00:07:42.247 18:06:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:42.814 18:06:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:42.814 18:06:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:42.814 18:06:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57646 00:07:42.814 18:06:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:43.380 18:06:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:43.380 18:06:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:43.380 18:06:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57646 00:07:43.380 18:06:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:43.947 18:06:09 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:07:43.947 18:06:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:43.947 18:06:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57646 00:07:43.947 18:06:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:44.204 18:06:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:44.204 18:06:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:44.204 18:06:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57646 00:07:44.204 18:06:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:44.770 18:06:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:44.770 18:06:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:44.770 18:06:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57646 00:07:44.770 18:06:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:44.770 18:06:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:44.770 18:06:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:44.770 SPDK target shutdown done 00:07:44.770 18:06:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:44.770 Success 00:07:44.770 18:06:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:44.770 00:07:44.770 real 0m4.607s 00:07:44.770 user 0m3.969s 00:07:44.770 sys 0m0.649s 00:07:44.770 18:06:10 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.770 18:06:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:44.770 ************************************ 00:07:44.770 END TEST json_config_extra_key 00:07:44.770 ************************************ 00:07:44.770 18:06:10 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:44.770 18:06:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.770 18:06:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.770 18:06:10 -- common/autotest_common.sh@10 -- # set +x 00:07:44.770 ************************************ 00:07:44.770 START TEST alias_rpc 00:07:44.770 ************************************ 00:07:44.770 18:06:10 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:45.028 * Looking for test storage... 00:07:45.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:45.028 18:06:10 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.028 18:06:10 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.028 18:06:10 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.028 18:06:10 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.028 18:06:10 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.028 18:06:10 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.028 18:06:10 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.028 18:06:10 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.028 18:06:10 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.028 18:06:10 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.028 18:06:10 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.028 18:06:10 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.028 18:06:10 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.028 18:06:10 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.028 18:06:10 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.028 18:06:10 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:45.028 18:06:10 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:07:45.028 18:06:10 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.028 18:06:10 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.029 18:06:10 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:45.029 18:06:10 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:45.029 18:06:10 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.029 18:06:10 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:45.029 18:06:10 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.029 18:06:10 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:45.029 18:06:10 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:45.029 18:06:10 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.029 18:06:10 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:45.029 18:06:10 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.029 18:06:10 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.029 18:06:10 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.029 18:06:10 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:45.029 18:06:10 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.029 18:06:10 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.029 --rc genhtml_branch_coverage=1 00:07:45.029 --rc genhtml_function_coverage=1 00:07:45.029 --rc genhtml_legend=1 00:07:45.029 --rc geninfo_all_blocks=1 00:07:45.029 --rc geninfo_unexecuted_blocks=1 00:07:45.029 00:07:45.029 ' 00:07:45.029 18:06:10 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.029 --rc genhtml_branch_coverage=1 00:07:45.029 --rc genhtml_function_coverage=1 00:07:45.029 --rc 
genhtml_legend=1 00:07:45.029 --rc geninfo_all_blocks=1 00:07:45.029 --rc geninfo_unexecuted_blocks=1 00:07:45.029 00:07:45.029 ' 00:07:45.029 18:06:10 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.029 --rc genhtml_branch_coverage=1 00:07:45.029 --rc genhtml_function_coverage=1 00:07:45.029 --rc genhtml_legend=1 00:07:45.029 --rc geninfo_all_blocks=1 00:07:45.029 --rc geninfo_unexecuted_blocks=1 00:07:45.029 00:07:45.029 ' 00:07:45.029 18:06:10 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.029 --rc genhtml_branch_coverage=1 00:07:45.029 --rc genhtml_function_coverage=1 00:07:45.029 --rc genhtml_legend=1 00:07:45.029 --rc geninfo_all_blocks=1 00:07:45.029 --rc geninfo_unexecuted_blocks=1 00:07:45.029 00:07:45.029 ' 00:07:45.029 18:06:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:45.029 18:06:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57763 00:07:45.029 18:06:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:45.029 18:06:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57763 00:07:45.029 18:06:10 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57763 ']' 00:07:45.029 18:06:10 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.029 18:06:10 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.029 18:06:10 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:45.029 18:06:10 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.029 18:06:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.287 [2024-12-06 18:06:10.553200] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:07:45.287 [2024-12-06 18:06:10.553395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57763 ] 00:07:45.287 [2024-12-06 18:06:10.735466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.545 [2024-12-06 18:06:10.864278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.480 18:06:11 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.480 18:06:11 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:46.480 18:06:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:46.737 18:06:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57763 00:07:46.737 18:06:12 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57763 ']' 00:07:46.737 18:06:12 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57763 00:07:46.737 18:06:12 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:46.737 18:06:12 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.737 18:06:12 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57763 00:07:46.737 18:06:12 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.737 killing process with pid 57763 00:07:46.737 18:06:12 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.737 18:06:12 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57763' 00:07:46.737 18:06:12 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57763 00:07:46.737 18:06:12 alias_rpc -- common/autotest_common.sh@978 -- # wait 57763 00:07:49.310 00:07:49.310 real 0m4.080s 00:07:49.310 user 0m4.134s 00:07:49.310 sys 0m0.635s 00:07:49.310 18:06:14 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.310 18:06:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.310 ************************************ 00:07:49.310 END TEST alias_rpc 00:07:49.310 ************************************ 00:07:49.310 18:06:14 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:49.310 18:06:14 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:49.310 18:06:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.310 18:06:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.310 18:06:14 -- common/autotest_common.sh@10 -- # set +x 00:07:49.310 ************************************ 00:07:49.310 START TEST spdkcli_tcp 00:07:49.310 ************************************ 00:07:49.310 18:06:14 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:49.310 * Looking for test storage... 
00:07:49.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:49.310 18:06:14 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:49.310 18:06:14 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:49.310 18:06:14 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:49.310 18:06:14 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:49.310 18:06:14 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.311 18:06:14 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.311 18:06:14 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.311 18:06:14 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:49.311 18:06:14 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.311 18:06:14 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:49.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.311 --rc genhtml_branch_coverage=1 00:07:49.311 --rc genhtml_function_coverage=1 00:07:49.311 --rc genhtml_legend=1 00:07:49.311 --rc geninfo_all_blocks=1 00:07:49.311 --rc geninfo_unexecuted_blocks=1 00:07:49.311 00:07:49.311 ' 00:07:49.311 18:06:14 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:49.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.311 --rc genhtml_branch_coverage=1 00:07:49.311 --rc genhtml_function_coverage=1 00:07:49.311 --rc genhtml_legend=1 00:07:49.311 --rc geninfo_all_blocks=1 00:07:49.311 --rc geninfo_unexecuted_blocks=1 00:07:49.311 00:07:49.311 ' 00:07:49.311 18:06:14 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:49.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.311 --rc genhtml_branch_coverage=1 00:07:49.311 --rc genhtml_function_coverage=1 00:07:49.311 --rc genhtml_legend=1 00:07:49.311 --rc geninfo_all_blocks=1 00:07:49.311 --rc geninfo_unexecuted_blocks=1 00:07:49.311 00:07:49.311 ' 00:07:49.311 18:06:14 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:49.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.311 --rc genhtml_branch_coverage=1 00:07:49.311 --rc genhtml_function_coverage=1 00:07:49.311 --rc genhtml_legend=1 00:07:49.311 --rc geninfo_all_blocks=1 00:07:49.311 --rc geninfo_unexecuted_blocks=1 00:07:49.311 00:07:49.311 ' 00:07:49.311 18:06:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:49.311 18:06:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:49.311 18:06:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:49.311 18:06:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:49.311 18:06:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:49.311 18:06:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:49.311 18:06:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:49.311 18:06:14 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.311 18:06:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.311 18:06:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57869 00:07:49.311 18:06:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57869 00:07:49.311 18:06:14 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57869 ']' 00:07:49.311 18:06:14 spdkcli_tcp -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:49.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.311 18:06:14 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.311 18:06:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:49.311 18:06:14 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.311 18:06:14 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.311 18:06:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.311 [2024-12-06 18:06:14.684321] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:07:49.311 [2024-12-06 18:06:14.684507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57869 ] 00:07:49.570 [2024-12-06 18:06:14.867643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:49.570 [2024-12-06 18:06:15.001720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.570 [2024-12-06 18:06:15.001732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.526 18:06:15 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.526 18:06:15 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:50.526 18:06:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:50.526 18:06:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57887 00:07:50.526 18:06:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:50.784 [ 00:07:50.784 "bdev_malloc_delete", 00:07:50.784 
"bdev_malloc_create", 00:07:50.784 "bdev_null_resize", 00:07:50.784 "bdev_null_delete", 00:07:50.784 "bdev_null_create", 00:07:50.784 "bdev_nvme_cuse_unregister", 00:07:50.784 "bdev_nvme_cuse_register", 00:07:50.784 "bdev_opal_new_user", 00:07:50.784 "bdev_opal_set_lock_state", 00:07:50.784 "bdev_opal_delete", 00:07:50.784 "bdev_opal_get_info", 00:07:50.784 "bdev_opal_create", 00:07:50.784 "bdev_nvme_opal_revert", 00:07:50.784 "bdev_nvme_opal_init", 00:07:50.784 "bdev_nvme_send_cmd", 00:07:50.785 "bdev_nvme_set_keys", 00:07:50.785 "bdev_nvme_get_path_iostat", 00:07:50.785 "bdev_nvme_get_mdns_discovery_info", 00:07:50.785 "bdev_nvme_stop_mdns_discovery", 00:07:50.785 "bdev_nvme_start_mdns_discovery", 00:07:50.785 "bdev_nvme_set_multipath_policy", 00:07:50.785 "bdev_nvme_set_preferred_path", 00:07:50.785 "bdev_nvme_get_io_paths", 00:07:50.785 "bdev_nvme_remove_error_injection", 00:07:50.785 "bdev_nvme_add_error_injection", 00:07:50.785 "bdev_nvme_get_discovery_info", 00:07:50.785 "bdev_nvme_stop_discovery", 00:07:50.785 "bdev_nvme_start_discovery", 00:07:50.785 "bdev_nvme_get_controller_health_info", 00:07:50.785 "bdev_nvme_disable_controller", 00:07:50.785 "bdev_nvme_enable_controller", 00:07:50.785 "bdev_nvme_reset_controller", 00:07:50.785 "bdev_nvme_get_transport_statistics", 00:07:50.785 "bdev_nvme_apply_firmware", 00:07:50.785 "bdev_nvme_detach_controller", 00:07:50.785 "bdev_nvme_get_controllers", 00:07:50.785 "bdev_nvme_attach_controller", 00:07:50.785 "bdev_nvme_set_hotplug", 00:07:50.785 "bdev_nvme_set_options", 00:07:50.785 "bdev_passthru_delete", 00:07:50.785 "bdev_passthru_create", 00:07:50.785 "bdev_lvol_set_parent_bdev", 00:07:50.785 "bdev_lvol_set_parent", 00:07:50.785 "bdev_lvol_check_shallow_copy", 00:07:50.785 "bdev_lvol_start_shallow_copy", 00:07:50.785 "bdev_lvol_grow_lvstore", 00:07:50.785 "bdev_lvol_get_lvols", 00:07:50.785 "bdev_lvol_get_lvstores", 00:07:50.785 "bdev_lvol_delete", 00:07:50.785 "bdev_lvol_set_read_only", 00:07:50.785 
"bdev_lvol_resize", 00:07:50.785 "bdev_lvol_decouple_parent", 00:07:50.785 "bdev_lvol_inflate", 00:07:50.785 "bdev_lvol_rename", 00:07:50.785 "bdev_lvol_clone_bdev", 00:07:50.785 "bdev_lvol_clone", 00:07:50.785 "bdev_lvol_snapshot", 00:07:50.785 "bdev_lvol_create", 00:07:50.785 "bdev_lvol_delete_lvstore", 00:07:50.785 "bdev_lvol_rename_lvstore", 00:07:50.785 "bdev_lvol_create_lvstore", 00:07:50.785 "bdev_raid_set_options", 00:07:50.785 "bdev_raid_remove_base_bdev", 00:07:50.785 "bdev_raid_add_base_bdev", 00:07:50.785 "bdev_raid_delete", 00:07:50.785 "bdev_raid_create", 00:07:50.785 "bdev_raid_get_bdevs", 00:07:50.785 "bdev_error_inject_error", 00:07:50.785 "bdev_error_delete", 00:07:50.785 "bdev_error_create", 00:07:50.785 "bdev_split_delete", 00:07:50.785 "bdev_split_create", 00:07:50.785 "bdev_delay_delete", 00:07:50.785 "bdev_delay_create", 00:07:50.785 "bdev_delay_update_latency", 00:07:50.785 "bdev_zone_block_delete", 00:07:50.785 "bdev_zone_block_create", 00:07:50.785 "blobfs_create", 00:07:50.785 "blobfs_detect", 00:07:50.785 "blobfs_set_cache_size", 00:07:50.785 "bdev_aio_delete", 00:07:50.785 "bdev_aio_rescan", 00:07:50.785 "bdev_aio_create", 00:07:50.785 "bdev_ftl_set_property", 00:07:50.785 "bdev_ftl_get_properties", 00:07:50.785 "bdev_ftl_get_stats", 00:07:50.785 "bdev_ftl_unmap", 00:07:50.785 "bdev_ftl_unload", 00:07:50.785 "bdev_ftl_delete", 00:07:50.785 "bdev_ftl_load", 00:07:50.785 "bdev_ftl_create", 00:07:50.785 "bdev_virtio_attach_controller", 00:07:50.785 "bdev_virtio_scsi_get_devices", 00:07:50.785 "bdev_virtio_detach_controller", 00:07:50.785 "bdev_virtio_blk_set_hotplug", 00:07:50.785 "bdev_iscsi_delete", 00:07:50.785 "bdev_iscsi_create", 00:07:50.785 "bdev_iscsi_set_options", 00:07:50.785 "accel_error_inject_error", 00:07:50.785 "ioat_scan_accel_module", 00:07:50.785 "dsa_scan_accel_module", 00:07:50.785 "iaa_scan_accel_module", 00:07:50.785 "keyring_file_remove_key", 00:07:50.785 "keyring_file_add_key", 00:07:50.785 
"keyring_linux_set_options", 00:07:50.785 "fsdev_aio_delete", 00:07:50.785 "fsdev_aio_create", 00:07:50.785 "iscsi_get_histogram", 00:07:50.785 "iscsi_enable_histogram", 00:07:50.785 "iscsi_set_options", 00:07:50.785 "iscsi_get_auth_groups", 00:07:50.785 "iscsi_auth_group_remove_secret", 00:07:50.785 "iscsi_auth_group_add_secret", 00:07:50.785 "iscsi_delete_auth_group", 00:07:50.785 "iscsi_create_auth_group", 00:07:50.785 "iscsi_set_discovery_auth", 00:07:50.785 "iscsi_get_options", 00:07:50.785 "iscsi_target_node_request_logout", 00:07:50.785 "iscsi_target_node_set_redirect", 00:07:50.785 "iscsi_target_node_set_auth", 00:07:50.785 "iscsi_target_node_add_lun", 00:07:50.785 "iscsi_get_stats", 00:07:50.785 "iscsi_get_connections", 00:07:50.785 "iscsi_portal_group_set_auth", 00:07:50.785 "iscsi_start_portal_group", 00:07:50.785 "iscsi_delete_portal_group", 00:07:50.785 "iscsi_create_portal_group", 00:07:50.785 "iscsi_get_portal_groups", 00:07:50.785 "iscsi_delete_target_node", 00:07:50.785 "iscsi_target_node_remove_pg_ig_maps", 00:07:50.785 "iscsi_target_node_add_pg_ig_maps", 00:07:50.785 "iscsi_create_target_node", 00:07:50.785 "iscsi_get_target_nodes", 00:07:50.785 "iscsi_delete_initiator_group", 00:07:50.785 "iscsi_initiator_group_remove_initiators", 00:07:50.785 "iscsi_initiator_group_add_initiators", 00:07:50.785 "iscsi_create_initiator_group", 00:07:50.785 "iscsi_get_initiator_groups", 00:07:50.785 "nvmf_set_crdt", 00:07:50.785 "nvmf_set_config", 00:07:50.785 "nvmf_set_max_subsystems", 00:07:50.785 "nvmf_stop_mdns_prr", 00:07:50.785 "nvmf_publish_mdns_prr", 00:07:50.785 "nvmf_subsystem_get_listeners", 00:07:50.785 "nvmf_subsystem_get_qpairs", 00:07:50.785 "nvmf_subsystem_get_controllers", 00:07:50.785 "nvmf_get_stats", 00:07:50.785 "nvmf_get_transports", 00:07:50.785 "nvmf_create_transport", 00:07:50.785 "nvmf_get_targets", 00:07:50.785 "nvmf_delete_target", 00:07:50.785 "nvmf_create_target", 00:07:50.785 "nvmf_subsystem_allow_any_host", 00:07:50.785 
"nvmf_subsystem_set_keys", 00:07:50.785 "nvmf_subsystem_remove_host", 00:07:50.785 "nvmf_subsystem_add_host", 00:07:50.785 "nvmf_ns_remove_host", 00:07:50.785 "nvmf_ns_add_host", 00:07:50.785 "nvmf_subsystem_remove_ns", 00:07:50.785 "nvmf_subsystem_set_ns_ana_group", 00:07:50.785 "nvmf_subsystem_add_ns", 00:07:50.785 "nvmf_subsystem_listener_set_ana_state", 00:07:50.785 "nvmf_discovery_get_referrals", 00:07:50.785 "nvmf_discovery_remove_referral", 00:07:50.785 "nvmf_discovery_add_referral", 00:07:50.785 "nvmf_subsystem_remove_listener", 00:07:50.785 "nvmf_subsystem_add_listener", 00:07:50.785 "nvmf_delete_subsystem", 00:07:50.785 "nvmf_create_subsystem", 00:07:50.785 "nvmf_get_subsystems", 00:07:50.785 "env_dpdk_get_mem_stats", 00:07:50.785 "nbd_get_disks", 00:07:50.785 "nbd_stop_disk", 00:07:50.785 "nbd_start_disk", 00:07:50.785 "ublk_recover_disk", 00:07:50.785 "ublk_get_disks", 00:07:50.785 "ublk_stop_disk", 00:07:50.785 "ublk_start_disk", 00:07:50.785 "ublk_destroy_target", 00:07:50.785 "ublk_create_target", 00:07:50.785 "virtio_blk_create_transport", 00:07:50.785 "virtio_blk_get_transports", 00:07:50.785 "vhost_controller_set_coalescing", 00:07:50.785 "vhost_get_controllers", 00:07:50.785 "vhost_delete_controller", 00:07:50.785 "vhost_create_blk_controller", 00:07:50.785 "vhost_scsi_controller_remove_target", 00:07:50.785 "vhost_scsi_controller_add_target", 00:07:50.785 "vhost_start_scsi_controller", 00:07:50.785 "vhost_create_scsi_controller", 00:07:50.785 "thread_set_cpumask", 00:07:50.785 "scheduler_set_options", 00:07:50.785 "framework_get_governor", 00:07:50.785 "framework_get_scheduler", 00:07:50.785 "framework_set_scheduler", 00:07:50.785 "framework_get_reactors", 00:07:50.785 "thread_get_io_channels", 00:07:50.785 "thread_get_pollers", 00:07:50.785 "thread_get_stats", 00:07:50.785 "framework_monitor_context_switch", 00:07:50.785 "spdk_kill_instance", 00:07:50.785 "log_enable_timestamps", 00:07:50.785 "log_get_flags", 00:07:50.785 "log_clear_flag", 
00:07:50.785 "log_set_flag", 00:07:50.785 "log_get_level", 00:07:50.785 "log_set_level", 00:07:50.785 "log_get_print_level", 00:07:50.785 "log_set_print_level", 00:07:50.785 "framework_enable_cpumask_locks", 00:07:50.785 "framework_disable_cpumask_locks", 00:07:50.785 "framework_wait_init", 00:07:50.785 "framework_start_init", 00:07:50.785 "scsi_get_devices", 00:07:50.785 "bdev_get_histogram", 00:07:50.785 "bdev_enable_histogram", 00:07:50.785 "bdev_set_qos_limit", 00:07:50.785 "bdev_set_qd_sampling_period", 00:07:50.786 "bdev_get_bdevs", 00:07:50.786 "bdev_reset_iostat", 00:07:50.786 "bdev_get_iostat", 00:07:50.786 "bdev_examine", 00:07:50.786 "bdev_wait_for_examine", 00:07:50.786 "bdev_set_options", 00:07:50.786 "accel_get_stats", 00:07:50.786 "accel_set_options", 00:07:50.786 "accel_set_driver", 00:07:50.786 "accel_crypto_key_destroy", 00:07:50.786 "accel_crypto_keys_get", 00:07:50.786 "accel_crypto_key_create", 00:07:50.786 "accel_assign_opc", 00:07:50.786 "accel_get_module_info", 00:07:50.786 "accel_get_opc_assignments", 00:07:50.786 "vmd_rescan", 00:07:50.786 "vmd_remove_device", 00:07:50.786 "vmd_enable", 00:07:50.786 "sock_get_default_impl", 00:07:50.786 "sock_set_default_impl", 00:07:50.786 "sock_impl_set_options", 00:07:50.786 "sock_impl_get_options", 00:07:50.786 "iobuf_get_stats", 00:07:50.786 "iobuf_set_options", 00:07:50.786 "keyring_get_keys", 00:07:50.786 "framework_get_pci_devices", 00:07:50.786 "framework_get_config", 00:07:50.786 "framework_get_subsystems", 00:07:50.786 "fsdev_set_opts", 00:07:50.786 "fsdev_get_opts", 00:07:50.786 "trace_get_info", 00:07:50.786 "trace_get_tpoint_group_mask", 00:07:50.786 "trace_disable_tpoint_group", 00:07:50.786 "trace_enable_tpoint_group", 00:07:50.786 "trace_clear_tpoint_mask", 00:07:50.786 "trace_set_tpoint_mask", 00:07:50.786 "notify_get_notifications", 00:07:50.786 "notify_get_types", 00:07:50.786 "spdk_get_version", 00:07:50.786 "rpc_get_methods" 00:07:50.786 ] 00:07:50.786 18:06:16 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:50.786 18:06:16 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:50.786 18:06:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.044 18:06:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:51.044 18:06:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57869 00:07:51.044 18:06:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57869 ']' 00:07:51.044 18:06:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57869 00:07:51.044 18:06:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:51.044 18:06:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.044 18:06:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57869 00:07:51.044 18:06:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.044 18:06:16 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.044 killing process with pid 57869 00:07:51.044 18:06:16 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57869' 00:07:51.044 18:06:16 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57869 00:07:51.044 18:06:16 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57869 00:07:53.582 00:07:53.582 real 0m4.227s 00:07:53.582 user 0m7.709s 00:07:53.582 sys 0m0.685s 00:07:53.582 18:06:18 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.582 18:06:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.582 ************************************ 00:07:53.582 END TEST spdkcli_tcp 00:07:53.582 ************************************ 00:07:53.582 18:06:18 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:53.582 18:06:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.582 18:06:18 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.582 18:06:18 -- common/autotest_common.sh@10 -- # set +x 00:07:53.582 ************************************ 00:07:53.582 START TEST dpdk_mem_utility 00:07:53.582 ************************************ 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:53.582 * Looking for test storage... 00:07:53.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:53.582 
18:06:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.582 18:06:18 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:53.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.582 --rc genhtml_branch_coverage=1 00:07:53.582 --rc genhtml_function_coverage=1 00:07:53.582 --rc genhtml_legend=1 00:07:53.582 --rc geninfo_all_blocks=1 00:07:53.582 --rc geninfo_unexecuted_blocks=1 00:07:53.582 00:07:53.582 ' 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:53.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.582 --rc 
genhtml_branch_coverage=1 00:07:53.582 --rc genhtml_function_coverage=1 00:07:53.582 --rc genhtml_legend=1 00:07:53.582 --rc geninfo_all_blocks=1 00:07:53.582 --rc geninfo_unexecuted_blocks=1 00:07:53.582 00:07:53.582 ' 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:53.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.582 --rc genhtml_branch_coverage=1 00:07:53.582 --rc genhtml_function_coverage=1 00:07:53.582 --rc genhtml_legend=1 00:07:53.582 --rc geninfo_all_blocks=1 00:07:53.582 --rc geninfo_unexecuted_blocks=1 00:07:53.582 00:07:53.582 ' 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:53.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.582 --rc genhtml_branch_coverage=1 00:07:53.582 --rc genhtml_function_coverage=1 00:07:53.582 --rc genhtml_legend=1 00:07:53.582 --rc geninfo_all_blocks=1 00:07:53.582 --rc geninfo_unexecuted_blocks=1 00:07:53.582 00:07:53.582 ' 00:07:53.582 18:06:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:53.582 18:06:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57992 00:07:53.582 18:06:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:53.582 18:06:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57992 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57992 ']' 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:53.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.582 18:06:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:53.582 [2024-12-06 18:06:18.945381] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:07:53.582 [2024-12-06 18:06:18.945567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57992 ] 00:07:53.842 [2024-12-06 18:06:19.133951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.842 [2024-12-06 18:06:19.291831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.778 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.778 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:54.778 18:06:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:54.778 18:06:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:54.778 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.778 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:54.778 { 00:07:54.778 "filename": "/tmp/spdk_mem_dump.txt" 00:07:54.778 } 00:07:54.778 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.778 18:06:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:54.778 DPDK memory size 824.000000 MiB in 1 heap(s) 00:07:54.778 1 heaps totaling size 824.000000 MiB 00:07:54.778 size: 
824.000000 MiB heap id: 0 00:07:54.778 end heaps---------- 00:07:54.778 9 mempools totaling size 603.782043 MiB 00:07:54.778 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:54.778 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:54.778 size: 100.555481 MiB name: bdev_io_57992 00:07:54.778 size: 50.003479 MiB name: msgpool_57992 00:07:54.778 size: 36.509338 MiB name: fsdev_io_57992 00:07:54.778 size: 21.763794 MiB name: PDU_Pool 00:07:54.778 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:54.778 size: 4.133484 MiB name: evtpool_57992 00:07:54.778 size: 0.026123 MiB name: Session_Pool 00:07:54.778 end mempools------- 00:07:54.778 6 memzones totaling size 4.142822 MiB 00:07:54.778 size: 1.000366 MiB name: RG_ring_0_57992 00:07:54.778 size: 1.000366 MiB name: RG_ring_1_57992 00:07:54.778 size: 1.000366 MiB name: RG_ring_4_57992 00:07:54.778 size: 1.000366 MiB name: RG_ring_5_57992 00:07:54.778 size: 0.125366 MiB name: RG_ring_2_57992 00:07:54.778 size: 0.015991 MiB name: RG_ring_3_57992 00:07:54.778 end memzones------- 00:07:54.778 18:06:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:55.037 heap id: 0 total size: 824.000000 MiB number of busy elements: 320 number of free elements: 18 00:07:55.037 list of free elements. 
size: 16.780151 MiB 00:07:55.037 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:55.037 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:55.037 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:55.037 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:55.037 element at address: 0x200019900040 with size: 0.999939 MiB 00:07:55.037 element at address: 0x200019a00000 with size: 0.999084 MiB 00:07:55.037 element at address: 0x200032600000 with size: 0.994324 MiB 00:07:55.037 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:55.037 element at address: 0x200019200000 with size: 0.959656 MiB 00:07:55.037 element at address: 0x200019d00040 with size: 0.936401 MiB 00:07:55.037 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:55.037 element at address: 0x20001b400000 with size: 0.561707 MiB 00:07:55.037 element at address: 0x200000c00000 with size: 0.489197 MiB 00:07:55.037 element at address: 0x200019600000 with size: 0.487976 MiB 00:07:55.037 element at address: 0x200019e00000 with size: 0.485413 MiB 00:07:55.037 element at address: 0x200012c00000 with size: 0.433228 MiB 00:07:55.037 element at address: 0x200028800000 with size: 0.390442 MiB 00:07:55.037 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:55.037 list of standard malloc elements. 
size: 199.288940 MiB
00:07:55.037 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:07:55.037 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:07:55.037 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:07:55.037 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:07:55.037 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:07:55.038 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:07:55.038 element at address: 0x200019deff40 with size: 0.062683 MiB
00:07:55.038 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:07:55.038 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:07:55.038 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:07:55.038 element at address: 0x200012bff040 with size: 0.000305 MiB
00:07:55.038 elements at 0x2000002d7b00 and 0x2000003d9d80, each with size: 0.000244 MiB
00:07:55.038 elements from 0x2000004fdf40 through 0x2000004ffdc0, each with size: 0.000244 MiB
00:07:55.038 elements from 0x20000087e1c0 through 0x20000087f4c0, each with size: 0.000244 MiB
00:07:55.038 elements at 0x2000008ff800 and 0x2000008ffa80, each with size: 0.000244 MiB
00:07:55.038 elements from 0x200000c7d3c0 through 0x200000c7ebc0, each with size: 0.000244 MiB
00:07:55.038 elements at 0x200000cfef00 and 0x200000cff000, each with size: 0.000244 MiB
00:07:55.038 elements from 0x20000a5ff200 through 0x20000a5fff00, each with size: 0.000244 MiB
00:07:55.038 elements from 0x200012bff180 through 0x200012bfff00, each with size: 0.000244 MiB
00:07:55.038 elements from 0x200012c6ee80 through 0x200012c6f880, each with size: 0.000244 MiB
00:07:55.038 elements at 0x200012cefbc0 and 0x2000192fdd00, each with size: 0.000244 MiB
00:07:55.039 elements from 0x20001967cec0 through 0x20001967d9c0, each with size: 0.000244 MiB
00:07:55.039 elements at 0x2000196fdd00, 0x200019affc40, 0x200019defbc0, 0x200019defcc0 and 0x200019ebc680, each with size: 0.000244 MiB
00:07:55.039 elements from 0x20001b48fcc0 through 0x20001b4953c0, each with size: 0.000244 MiB
00:07:55.039 elements at 0x200028863f40 and 0x200028864040, each with size: 0.000244 MiB
00:07:55.039 elements from 0x20002886ad00 through 0x20002886fe80, each with size: 0.000244 MiB
00:07:55.040 list of memzone associated elements.
size: 607.930908 MiB
00:07:55.040 element at address: 0x20001b4954c0 with size: 211.416809 MiB; associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:07:55.040 element at address: 0x20002886ff80 with size: 157.562622 MiB; associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:07:55.040 element at address: 0x200012df1e40 with size: 100.055115 MiB; associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57992_0
00:07:55.040 element at address: 0x200000dff340 with size: 48.003113 MiB; associated memzone info: size: 48.002930 MiB name: MP_msgpool_57992_0
00:07:55.040 element at address: 0x200003ffdb40 with size: 36.008972 MiB; associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57992_0
00:07:55.040 element at address: 0x200019fbe900 with size: 20.255615 MiB; associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:07:55.040 element at address: 0x2000327feb00 with size: 18.005127 MiB; associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:07:55.040 element at address: 0x2000004ffec0 with size: 3.000305 MiB; associated memzone info: size: 3.000122 MiB name: MP_evtpool_57992_0
00:07:55.040 element at address: 0x2000009ffdc0 with size: 2.000549 MiB; associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57992
00:07:55.040 element at address: 0x2000002d7c00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_evtpool_57992
00:07:55.040 element at address: 0x2000196fde00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:07:55.040 element at address: 0x200019ebc780 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:07:55.040 element at address: 0x2000192fde00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:07:55.040 element at address: 0x200012cefcc0 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:07:55.040 element at address: 0x200000cff100 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_0_57992
00:07:55.040 element at address: 0x2000008ffb80 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_1_57992
00:07:55.040 element at address: 0x200019affd40 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_4_57992
00:07:55.040 element at address: 0x2000326fe8c0 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_5_57992
00:07:55.040 element at address: 0x20000087f5c0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57992
00:07:55.040 element at address: 0x200000c7ecc0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57992
00:07:55.040 element at address: 0x20001967dac0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:07:55.040 element at address: 0x200012c6f980 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:07:55.040 element at address: 0x200019e7c440 with size: 0.250549 MiB; associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:07:55.041 element at address: 0x2000002b78c0 with size: 0.125549 MiB; associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57992
00:07:55.041 element at address: 0x20000085df80 with size: 0.125549 MiB; associated memzone info: size: 0.125366 MiB name: RG_ring_2_57992
00:07:55.041 element at address: 0x2000192f5ac0 with size: 0.031799 MiB; associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:07:55.041 element at address: 0x200028864140 with size: 0.023804 MiB; associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:07:55.041 element at address: 0x200000859d40 with size: 0.016174 MiB; associated memzone info: size: 0.015991 MiB name: RG_ring_3_57992
00:07:55.041 element at address: 0x20002886a2c0 with size: 0.002502 MiB; associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:07:55.041 element at address: 0x2000004ffa40 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_msgpool_57992
00:07:55.041 element at address: 0x2000008ff900 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57992
00:07:55.041 element at address: 0x200012bffd80 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57992
00:07:55.041 element at address: 0x20002886ae00 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:07:55.041 18:06:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:55.041 18:06:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57992
00:07:55.041 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57992 ']'
00:07:55.041 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57992
00:07:55.041 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:07:55.041 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:55.041 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57992
00:07:55.041 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 57992
00:07:55.041 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:55.041 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57992'
00:07:55.041 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57992
00:07:55.041 18:06:20 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57992
00:07:57.569 
00:07:57.569 real	0m3.907s
00:07:57.569 user	0m3.965s
00:07:57.569 sys	0m0.607s
00:07:57.569 18:06:22 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:57.569 ************************************
00:07:57.569 END TEST dpdk_mem_utility
00:07:57.569 ************************************
00:07:57.569 18:06:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:57.569 18:06:22 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:57.569 18:06:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:57.569 18:06:22 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:57.569 18:06:22 -- common/autotest_common.sh@10 -- # set +x
00:07:57.569 ************************************
00:07:57.569 START TEST event
00:07:57.569 ************************************
00:07:57.569 18:06:22 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:57.569 * Looking for test storage...
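The teardown traced above follows autotest_common.sh's killprocess pattern: refuse an empty pid, verify the process is alive, look up its name, then kill and reap it. A minimal standalone sketch of that guard-then-kill sequence (the function name and message wording here are illustrative, not SPDK's exact implementation):

```shell
#!/usr/bin/env bash
# Sketch of the guard-then-kill pattern seen in the killprocess trace.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # mirrors the '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 1     # is the process still running?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap it if it is our child
}

# Demo against a throwaway background process:
sleep 30 &
killprocess_sketch $!
```

`kill -0` sends no signal at all; it only checks that the pid exists and is signalable, which is why the trace runs it before the real `kill`.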
00:07:57.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:57.569 18:06:22 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:57.569 18:06:22 event -- common/autotest_common.sh@1711 -- # lcov --version
00:07:57.569 18:06:22 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:57.569 18:06:22 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:57.569 18:06:22 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:57.569 18:06:22 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:57.569 18:06:22 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:57.569 18:06:22 event -- scripts/common.sh@336 -- # IFS=.-:
00:07:57.569 18:06:22 event -- scripts/common.sh@336 -- # read -ra ver1
00:07:57.569 18:06:22 event -- scripts/common.sh@337 -- # IFS=.-:
00:07:57.569 18:06:22 event -- scripts/common.sh@337 -- # read -ra ver2
00:07:57.569 18:06:22 event -- scripts/common.sh@338 -- # local 'op=<'
00:07:57.569 18:06:22 event -- scripts/common.sh@340 -- # ver1_l=2
00:07:57.569 18:06:22 event -- scripts/common.sh@341 -- # ver2_l=1
00:07:57.569 18:06:22 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:57.569 18:06:22 event -- scripts/common.sh@344 -- # case "$op" in
00:07:57.569 18:06:22 event -- scripts/common.sh@345 -- # : 1
00:07:57.569 18:06:22 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:57.569 18:06:22 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:57.569 18:06:22 event -- scripts/common.sh@365 -- # decimal 1
00:07:57.569 18:06:22 event -- scripts/common.sh@353 -- # local d=1
00:07:57.569 18:06:22 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:57.569 18:06:22 event -- scripts/common.sh@355 -- # echo 1
00:07:57.569 18:06:22 event -- scripts/common.sh@365 -- # ver1[v]=1
00:07:57.569 18:06:22 event -- scripts/common.sh@366 -- # decimal 2
00:07:57.569 18:06:22 event -- scripts/common.sh@353 -- # local d=2
00:07:57.569 18:06:22 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:57.569 18:06:22 event -- scripts/common.sh@355 -- # echo 2
00:07:57.569 18:06:22 event -- scripts/common.sh@366 -- # ver2[v]=2
00:07:57.569 18:06:22 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:57.569 18:06:22 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:57.569 18:06:22 event -- scripts/common.sh@368 -- # return 0
00:07:57.569 18:06:22 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:57.569 18:06:22 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:57.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.569 --rc genhtml_branch_coverage=1
00:07:57.569 --rc genhtml_function_coverage=1
00:07:57.569 --rc genhtml_legend=1
00:07:57.569 --rc geninfo_all_blocks=1
00:07:57.569 --rc geninfo_unexecuted_blocks=1
00:07:57.569 
00:07:57.569 '
00:07:57.569 18:06:22 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:57.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.570 --rc genhtml_branch_coverage=1
00:07:57.570 --rc genhtml_function_coverage=1
00:07:57.570 --rc genhtml_legend=1
00:07:57.570 --rc geninfo_all_blocks=1
00:07:57.570 --rc geninfo_unexecuted_blocks=1
00:07:57.570 
00:07:57.570 '
00:07:57.570 18:06:22 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:07:57.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.570 --rc genhtml_branch_coverage=1
00:07:57.570 --rc genhtml_function_coverage=1
00:07:57.570 --rc genhtml_legend=1
00:07:57.570 --rc geninfo_all_blocks=1
00:07:57.570 --rc geninfo_unexecuted_blocks=1
00:07:57.570 
00:07:57.570 '
00:07:57.570 18:06:22 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:07:57.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:57.570 --rc genhtml_branch_coverage=1
00:07:57.570 --rc genhtml_function_coverage=1
00:07:57.570 --rc genhtml_legend=1
00:07:57.570 --rc geninfo_all_blocks=1
00:07:57.570 --rc geninfo_unexecuted_blocks=1
00:07:57.570 
00:07:57.570 '
00:07:57.570 18:06:22 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:07:57.570 18:06:22 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:57.570 18:06:22 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:57.570 18:06:22 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:57.570 18:06:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:57.570 18:06:22 event -- common/autotest_common.sh@10 -- # set +x
00:07:57.570 ************************************
00:07:57.570 START TEST event_perf
00:07:57.570 ************************************
00:07:57.570 18:06:22 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:57.570 Running I/O for 1 seconds...[2024-12-06 18:06:22.841197] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization...
00:07:57.570 [2024-12-06 18:06:22.841402] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58095 ] 00:07:57.570 [2024-12-06 18:06:23.027007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.828 [2024-12-06 18:06:23.162046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.828 [2024-12-06 18:06:23.162196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.828 Running I/O for 1 seconds...[2024-12-06 18:06:23.163409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.828 [2024-12-06 18:06:23.163421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.204 00:07:59.204 lcore 0: 197104 00:07:59.204 lcore 1: 197104 00:07:59.204 lcore 2: 197105 00:07:59.204 lcore 3: 197104 00:07:59.204 done. 
00:07:59.204 00:07:59.204 real 0m1.606s 00:07:59.204 user 0m4.355s 00:07:59.204 sys 0m0.124s 00:07:59.204 18:06:24 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.204 18:06:24 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:59.204 ************************************ 00:07:59.204 END TEST event_perf 00:07:59.204 ************************************ 00:07:59.204 18:06:24 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:59.204 18:06:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:59.204 18:06:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.204 18:06:24 event -- common/autotest_common.sh@10 -- # set +x 00:07:59.204 ************************************ 00:07:59.204 START TEST event_reactor 00:07:59.204 ************************************ 00:07:59.204 18:06:24 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:59.204 [2024-12-06 18:06:24.493301] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:07:59.204 [2024-12-06 18:06:24.493549] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58134 ] 00:07:59.204 [2024-12-06 18:06:24.685825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.463 [2024-12-06 18:06:24.806334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.837 test_start 00:08:00.837 oneshot 00:08:00.837 tick 100 00:08:00.837 tick 100 00:08:00.837 tick 250 00:08:00.837 tick 100 00:08:00.837 tick 100 00:08:00.837 tick 100 00:08:00.837 tick 250 00:08:00.837 tick 500 00:08:00.837 tick 100 00:08:00.837 tick 100 00:08:00.837 tick 250 00:08:00.837 tick 100 00:08:00.837 tick 100 00:08:00.837 test_end 00:08:00.837 00:08:00.837 real 0m1.630s 00:08:00.837 user 0m1.415s 00:08:00.837 sys 0m0.106s 00:08:00.837 18:06:26 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.837 18:06:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:00.837 ************************************ 00:08:00.837 END TEST event_reactor 00:08:00.837 ************************************ 00:08:00.837 18:06:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:00.837 18:06:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:00.837 18:06:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.837 18:06:26 event -- common/autotest_common.sh@10 -- # set +x 00:08:00.837 ************************************ 00:08:00.837 START TEST event_reactor_perf 00:08:00.837 ************************************ 00:08:00.837 18:06:26 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:00.838 [2024-12-06 
18:06:26.166437] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:08:00.838 [2024-12-06 18:06:26.166587] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58176 ] 00:08:00.838 [2024-12-06 18:06:26.338045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.097 [2024-12-06 18:06:26.465219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.506 test_start 00:08:02.506 test_end 00:08:02.506 Performance: 282834 events per second 00:08:02.506 00:08:02.506 real 0m1.565s 00:08:02.506 user 0m1.376s 00:08:02.506 sys 0m0.080s 00:08:02.506 18:06:27 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.506 18:06:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:02.506 ************************************ 00:08:02.506 END TEST event_reactor_perf 00:08:02.506 ************************************ 00:08:02.506 18:06:27 event -- event/event.sh@49 -- # uname -s 00:08:02.506 18:06:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:02.506 18:06:27 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:02.506 18:06:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.506 18:06:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.506 18:06:27 event -- common/autotest_common.sh@10 -- # set +x 00:08:02.506 ************************************ 00:08:02.506 START TEST event_scheduler 00:08:02.506 ************************************ 00:08:02.506 18:06:27 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:02.506 * Looking for test storage... 
00:08:02.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:02.506 18:06:27 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:02.506 18:06:27 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:08:02.506 18:06:27 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:02.506 18:06:27 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.506 18:06:27 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:02.506 18:06:27 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.506 18:06:27 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:02.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.506 --rc genhtml_branch_coverage=1 00:08:02.506 --rc genhtml_function_coverage=1 00:08:02.506 --rc genhtml_legend=1 00:08:02.506 --rc geninfo_all_blocks=1 00:08:02.506 --rc geninfo_unexecuted_blocks=1 00:08:02.506 00:08:02.506 ' 00:08:02.506 18:06:27 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:02.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.506 --rc genhtml_branch_coverage=1 00:08:02.506 --rc genhtml_function_coverage=1 00:08:02.506 --rc 
genhtml_legend=1 00:08:02.506 --rc geninfo_all_blocks=1 00:08:02.506 --rc geninfo_unexecuted_blocks=1 00:08:02.506 00:08:02.506 ' 00:08:02.506 18:06:27 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:02.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.506 --rc genhtml_branch_coverage=1 00:08:02.506 --rc genhtml_function_coverage=1 00:08:02.506 --rc genhtml_legend=1 00:08:02.506 --rc geninfo_all_blocks=1 00:08:02.506 --rc geninfo_unexecuted_blocks=1 00:08:02.506 00:08:02.506 ' 00:08:02.506 18:06:27 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:02.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.506 --rc genhtml_branch_coverage=1 00:08:02.506 --rc genhtml_function_coverage=1 00:08:02.506 --rc genhtml_legend=1 00:08:02.506 --rc geninfo_all_blocks=1 00:08:02.506 --rc geninfo_unexecuted_blocks=1 00:08:02.506 00:08:02.506 ' 00:08:02.506 18:06:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:02.506 18:06:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58247 00:08:02.506 18:06:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:02.506 18:06:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:02.506 18:06:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58247 00:08:02.506 18:06:27 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58247 ']' 00:08:02.506 18:06:27 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.507 18:06:27 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:02.507 18:06:27 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.507 18:06:27 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.507 18:06:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:02.766 [2024-12-06 18:06:28.036404] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:08:02.766 [2024-12-06 18:06:28.037094] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58247 ] 00:08:02.766 [2024-12-06 18:06:28.221135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.024 [2024-12-06 18:06:28.417059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.024 [2024-12-06 18:06:28.417167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.024 [2024-12-06 18:06:28.417265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.024 [2024-12-06 18:06:28.417266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.602 18:06:29 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.602 18:06:29 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:03.602 18:06:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:03.602 18:06:29 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.602 18:06:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:03.602 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:03.602 POWER: Cannot set governor of lcore 0 to userspace 00:08:03.602 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:03.602 POWER: Cannot set governor of lcore 0 to performance 00:08:03.602 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:03.602 POWER: Cannot set governor of lcore 0 to userspace 00:08:03.602 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:03.602 POWER: Cannot set governor of lcore 0 to userspace 00:08:03.602 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:03.602 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:03.602 POWER: Unable to set Power Management Environment for lcore 0 00:08:03.602 [2024-12-06 18:06:29.032587] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:03.602 [2024-12-06 18:06:29.032627] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:03.602 [2024-12-06 18:06:29.032642] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:03.602 [2024-12-06 18:06:29.032669] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:03.602 [2024-12-06 18:06:29.032682] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:03.602 [2024-12-06 18:06:29.032697] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:03.602 18:06:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.602 18:06:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:03.602 18:06:29 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.602 18:06:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:04.178 [2024-12-06 18:06:29.442243] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:08:04.178 18:06:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.178 18:06:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:04.178 18:06:29 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.178 18:06:29 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.178 18:06:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:04.178 ************************************ 00:08:04.178 START TEST scheduler_create_thread 00:08:04.178 ************************************ 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.178 2 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.178 3 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.178 4 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.178 5 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.178 6 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.178 7 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.178 8 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.178 9 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.178 10 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.178 18:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.114 18:06:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.114 18:06:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:05.114 18:06:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:05.114 18:06:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.114 18:06:30 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.100 18:06:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.100 00:08:06.100 real 0m2.139s 00:08:06.100 user 0m0.018s 00:08:06.100 sys 0m0.005s 00:08:06.100 18:06:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.100 18:06:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.100 ************************************ 00:08:06.100 END TEST scheduler_create_thread 00:08:06.100 ************************************ 00:08:06.359 18:06:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:06.359 18:06:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58247 00:08:06.359 18:06:31 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58247 ']' 00:08:06.359 18:06:31 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58247 00:08:06.359 18:06:31 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:06.359 18:06:31 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.359 18:06:31 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58247 00:08:06.359 18:06:31 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:06.359 18:06:31 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:06.359 killing process with pid 58247 00:08:06.359 18:06:31 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58247' 00:08:06.359 18:06:31 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58247 00:08:06.359 18:06:31 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58247 00:08:06.618 [2024-12-06 18:06:32.072374] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:08.000 00:08:08.000 real 0m5.440s 00:08:08.000 user 0m9.705s 00:08:08.000 sys 0m0.518s 00:08:08.000 18:06:33 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.000 18:06:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:08.000 ************************************ 00:08:08.000 END TEST event_scheduler 00:08:08.000 ************************************ 00:08:08.000 18:06:33 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:08.000 18:06:33 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:08.000 18:06:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.000 18:06:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.000 18:06:33 event -- common/autotest_common.sh@10 -- # set +x 00:08:08.000 ************************************ 00:08:08.000 START TEST app_repeat 00:08:08.000 ************************************ 00:08:08.000 18:06:33 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:08.000 18:06:33 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.000 18:06:33 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.000 18:06:33 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:08.000 18:06:33 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:08.000 18:06:33 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:08.000 18:06:33 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:08.000 18:06:33 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:08.000 18:06:33 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58353 00:08:08.000 18:06:33 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:08.000 18:06:33 event.app_repeat -- 
event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:08.000 18:06:33 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58353' 00:08:08.000 Process app_repeat pid: 58353 00:08:08.000 18:06:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:08.000 spdk_app_start Round 0 00:08:08.000 18:06:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:08.000 18:06:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58353 /var/tmp/spdk-nbd.sock 00:08:08.000 18:06:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58353 ']' 00:08:08.000 18:06:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:08.000 18:06:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:08.000 18:06:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:08.000 18:06:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.000 18:06:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:08.000 [2024-12-06 18:06:33.282796] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:08:08.000 [2024-12-06 18:06:33.282959] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58353 ]
00:08:08.000 [2024-12-06 18:06:33.458668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:08.258 [2024-12-06 18:06:33.587173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:08.258 [2024-12-06 18:06:33.587177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:08.826 18:06:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:08.826 18:06:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:08.826 18:06:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:09.393 Malloc0
00:08:09.393 18:06:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:09.651 Malloc1
00:08:09.651 18:06:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:09.651 18:06:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:09.908 /dev/nbd0
00:08:09.908 18:06:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:09.908 18:06:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:09.908 18:06:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:09.908 18:06:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:09.908 18:06:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:09.908 18:06:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:09.908 18:06:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:09.908 18:06:35 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:09.908 18:06:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:09.908 18:06:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:09.908 18:06:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:09.908 1+0 records in
00:08:09.908 1+0 records out
00:08:09.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247916 s, 16.5 MB/s
00:08:09.909 18:06:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:09.909 18:06:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:09.909 18:06:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:09.909 18:06:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:09.909 18:06:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:09.909 18:06:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:09.909 18:06:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:09.909 18:06:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:10.166 /dev/nbd1
00:08:10.166 18:06:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:10.166 18:06:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:10.166 18:06:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:08:10.166 18:06:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:10.166 18:06:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:10.166 18:06:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:10.166 18:06:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:08:10.166 18:06:35 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:10.166 18:06:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:10.166 18:06:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:10.166 18:06:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:10.166 1+0 records in
00:08:10.166 1+0 records out
00:08:10.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425689 s, 9.6 MB/s
00:08:10.166 18:06:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:10.166 18:06:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:10.166 18:06:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:10.166 18:06:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:10.166 18:06:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:10.166 18:06:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:10.166 18:06:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:10.166 18:06:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:10.167 18:06:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:10.167 18:06:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:10.733 {
00:08:10.733 "nbd_device": "/dev/nbd0",
00:08:10.733 "bdev_name": "Malloc0"
00:08:10.733 },
00:08:10.733 {
00:08:10.733 "nbd_device": "/dev/nbd1",
00:08:10.733 "bdev_name": "Malloc1"
00:08:10.733 }
00:08:10.733 ]'
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:08:10.733 {
00:08:10.733 "nbd_device": "/dev/nbd0",
00:08:10.733 "bdev_name": "Malloc0"
00:08:10.733 },
00:08:10.733 {
00:08:10.733 "nbd_device": "/dev/nbd1",
00:08:10.733 "bdev_name": "Malloc1"
00:08:10.733 }
00:08:10.733 ]'
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:10.733 /dev/nbd1'
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:10.733 /dev/nbd1'
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:10.733 18:06:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:10.733 256+0 records in
00:08:10.733 256+0 records out
00:08:10.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00717098 s, 146 MB/s
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:10.733 256+0 records in
00:08:10.733 256+0 records out
00:08:10.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269376 s, 38.9 MB/s
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:10.733 256+0 records in
00:08:10.733 256+0 records out
00:08:10.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0379865 s, 27.6 MB/s
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:10.733 18:06:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:10.990 18:06:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:10.990 18:06:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:10.990 18:06:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:10.990 18:06:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:10.991 18:06:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:10.991 18:06:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:10.991 18:06:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:10.991 18:06:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:10.991 18:06:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:10.991 18:06:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:11.249 18:06:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:11.249 18:06:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:11.249 18:06:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:11.249 18:06:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:11.249 18:06:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:11.249 18:06:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:11.249 18:06:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:11.249 18:06:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:11.249 18:06:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:11.249 18:06:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:11.249 18:06:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:11.507 18:06:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:11.507 18:06:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:11.507 18:06:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:11.507 18:06:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:11.507 18:06:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:08:11.507 18:06:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:11.507 18:06:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:08:11.507 18:06:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:08:11.507 18:06:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:08:11.507 18:06:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:08:11.507 18:06:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:11.507 18:06:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:08:11.507 18:06:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:12.074 18:06:37 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:08:13.010 [2024-12-06 18:06:38.460697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:13.268 [2024-12-06 18:06:38.585486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:13.268 [2024-12-06 18:06:38.585496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:13.268 [2024-12-06 18:06:38.774758] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:13.268 [2024-12-06 18:06:38.774869] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:15.171 spdk_app_start Round 1
00:08:15.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:15.171 18:06:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:15.171 18:06:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:08:15.171 18:06:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58353 /var/tmp/spdk-nbd.sock
00:08:15.171 18:06:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58353 ']'
00:08:15.171 18:06:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:15.171 18:06:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:15.171 18:06:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:15.171 18:06:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:15.171 18:06:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:15.171 18:06:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:15.171 18:06:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:15.171 18:06:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:15.430 Malloc0
00:08:15.689 18:06:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:15.949 Malloc1
00:08:15.949 18:06:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:15.949 18:06:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:15.949 18:06:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:15.949 18:06:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:15.949 18:06:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:15.949 18:06:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:15.949 18:06:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:15.949 18:06:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:15.949 18:06:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:15.949 18:06:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:15.949 18:06:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:15.949 18:06:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:15.949 18:06:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:08:15.950 18:06:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:15.950 18:06:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:15.950 18:06:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:16.217 /dev/nbd0
00:08:16.217 18:06:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:16.217 18:06:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:16.217 18:06:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:16.217 18:06:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:16.217 18:06:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:16.217 18:06:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:16.217 18:06:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:16.217 18:06:41 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:16.217 18:06:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:16.217 18:06:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:16.217 18:06:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:16.217 1+0 records in
00:08:16.217 1+0 records out
00:08:16.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333964 s, 12.3 MB/s
00:08:16.217 18:06:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:16.217 18:06:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:16.217 18:06:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:16.217 18:06:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:16.217 18:06:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:16.217 18:06:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:16.217 18:06:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:16.217 18:06:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:16.476 /dev/nbd1
00:08:16.476 18:06:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:16.476 18:06:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:16.476 18:06:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:08:16.476 18:06:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:16.476 18:06:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:16.476 18:06:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:16.476 18:06:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:08:16.476 18:06:41 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:16.476 18:06:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:16.476 18:06:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:16.476 18:06:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:16.476 1+0 records in
00:08:16.476 1+0 records out
00:08:16.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293145 s, 14.0 MB/s
00:08:16.476 18:06:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:16.476 18:06:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:16.476 18:06:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:08:16.476 18:06:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:16.476 18:06:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:16.476 18:06:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:16.476 18:06:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:16.476 18:06:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:16.476 18:06:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:16.476 18:06:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:17.045 {
00:08:17.045 "nbd_device": "/dev/nbd0",
00:08:17.045 "bdev_name": "Malloc0"
00:08:17.045 },
00:08:17.045 {
00:08:17.045 "nbd_device": "/dev/nbd1",
00:08:17.045 "bdev_name": "Malloc1"
00:08:17.045 }
00:08:17.045 ]'
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:08:17.045 {
00:08:17.045 "nbd_device": "/dev/nbd0",
00:08:17.045 "bdev_name": "Malloc0"
00:08:17.045 },
00:08:17.045 {
00:08:17.045 "nbd_device": "/dev/nbd1",
00:08:17.045 "bdev_name": "Malloc1"
00:08:17.045 }
00:08:17.045 ]'
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:17.045 /dev/nbd1'
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:17.045 /dev/nbd1'
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
18:06:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:17.045 256+0 records in
00:08:17.045 256+0 records out
00:08:17.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00840629 s, 125 MB/s
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:17.045 256+0 records in
00:08:17.045 256+0 records out
00:08:17.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273284 s, 38.4 MB/s
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:17.045 256+0 records in
00:08:17.045 256+0 records out
00:08:17.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0357699 s, 29.3 MB/s
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:17.045 18:06:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:17.304 18:06:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:17.304 18:06:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:17.304 18:06:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:17.304 18:06:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:17.304 18:06:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:17.304 18:06:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:17.304 18:06:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:17.304 18:06:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:17.304 18:06:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:17.304 18:06:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:17.563 18:06:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:17.563 18:06:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:17.563 18:06:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:17.563 18:06:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:17.563 18:06:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:17.563 18:06:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:17.563 18:06:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:17.563 18:06:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:17.563 18:06:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:17.563 18:06:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:17.563 18:06:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:18.133 18:06:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:18.133 18:06:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:18.133 18:06:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:18.133 18:06:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:18.133 18:06:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:08:18.133 18:06:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:18.133 18:06:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:08:18.133 18:06:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:08:18.133 18:06:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:08:18.133 18:06:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:08:18.133 18:06:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:18.133 18:06:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:08:18.133 18:06:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:18.393 18:06:43 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:08:19.769 [2024-12-06 18:06:45.012103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:19.769 [2024-12-06 18:06:45.164855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:19.769 [2024-12-06 18:06:45.164861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:20.027 [2024-12-06 18:06:45.379702] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:20.027 [2024-12-06 18:06:45.379860] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:21.403 spdk_app_start Round 2
00:08:21.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:21.403 18:06:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:21.403 18:06:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:08:21.403 18:06:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58353 /var/tmp/spdk-nbd.sock
00:08:21.403 18:06:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58353 ']'
00:08:21.403 18:06:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:21.403 18:06:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:21.403 18:06:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:21.403 18:06:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:21.403 18:06:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:21.662 18:06:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:21.662 18:06:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:21.662 18:06:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:22.230 Malloc0
00:08:22.230 18:06:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:22.489 Malloc1
00:08:22.489 18:06:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:22.489 18:06:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:22.746 /dev/nbd0
00:08:22.746 18:06:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:22.746 18:06:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:22.746 18:06:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:22.746 18:06:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:22.746 18:06:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:22.746 18:06:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:22.746 18:06:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:22.746 18:06:48 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:22.746 18:06:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:22.746 18:06:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:22.746 18:06:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:22.746 1+0 records in 00:08:22.746 1+0 records out 00:08:22.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320842 s, 12.8 MB/s 00:08:22.746 18:06:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:22.746 18:06:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:22.746 18:06:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:22.746 18:06:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:22.746 18:06:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:22.746 18:06:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:22.746 18:06:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:22.746 18:06:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:23.006 /dev/nbd1 00:08:23.006 18:06:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:23.006 18:06:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:23.006 18:06:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:23.006 18:06:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:23.006 18:06:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:23.006 18:06:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:23.006 18:06:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:23.006 18:06:48 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:08:23.006 18:06:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:23.006 18:06:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:23.006 18:06:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:23.265 1+0 records in 00:08:23.265 1+0 records out 00:08:23.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334449 s, 12.2 MB/s 00:08:23.265 18:06:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:23.265 18:06:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:23.265 18:06:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:23.265 18:06:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:23.265 18:06:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:23.265 18:06:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:23.265 18:06:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:23.265 18:06:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:23.265 18:06:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.265 18:06:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:23.524 { 00:08:23.524 "nbd_device": "/dev/nbd0", 00:08:23.524 "bdev_name": "Malloc0" 00:08:23.524 }, 00:08:23.524 { 00:08:23.524 "nbd_device": "/dev/nbd1", 00:08:23.524 "bdev_name": "Malloc1" 00:08:23.524 } 00:08:23.524 ]' 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:23.524 { 
00:08:23.524 "nbd_device": "/dev/nbd0", 00:08:23.524 "bdev_name": "Malloc0" 00:08:23.524 }, 00:08:23.524 { 00:08:23.524 "nbd_device": "/dev/nbd1", 00:08:23.524 "bdev_name": "Malloc1" 00:08:23.524 } 00:08:23.524 ]' 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:23.524 /dev/nbd1' 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:23.524 /dev/nbd1' 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:23.524 256+0 records in 00:08:23.524 256+0 records out 00:08:23.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106253 s, 98.7 MB/s 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.524 18:06:48 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:23.524 256+0 records in 00:08:23.524 256+0 records out 00:08:23.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262298 s, 40.0 MB/s 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:23.524 256+0 records in 00:08:23.524 256+0 records out 00:08:23.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0337507 s, 31.1 MB/s 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.524 18:06:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:23.524 18:06:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.524 18:06:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:23.524 18:06:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:08:23.524 18:06:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:23.524 18:06:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.524 18:06:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.524 18:06:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:23.524 18:06:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:23.524 18:06:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.524 18:06:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:24.091 18:06:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:24.091 18:06:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:24.091 18:06:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:24.091 18:06:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.091 18:06:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.091 18:06:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:24.091 18:06:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:24.091 18:06:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.091 18:06:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.091 18:06:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:24.349 18:06:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:24.349 18:06:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:24.349 18:06:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:24.349 18:06:49 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.349 18:06:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.349 18:06:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:24.349 18:06:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:24.349 18:06:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.349 18:06:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:24.349 18:06:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.349 18:06:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:24.607 18:06:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:24.607 18:06:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:24.607 18:06:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:24.607 18:06:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:24.607 18:06:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:24.607 18:06:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:24.607 18:06:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:24.607 18:06:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:24.607 18:06:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:24.607 18:06:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:24.607 18:06:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:24.607 18:06:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:24.607 18:06:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:25.173 18:06:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:26.108 
[2024-12-06 18:06:51.550675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:26.367 [2024-12-06 18:06:51.676813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.367 [2024-12-06 18:06:51.676820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.367 [2024-12-06 18:06:51.868271] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:26.367 [2024-12-06 18:06:51.868369] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:28.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:28.275 18:06:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58353 /var/tmp/spdk-nbd.sock 00:08:28.276 18:06:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58353 ']' 00:08:28.276 18:06:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:28.276 18:06:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.276 18:06:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:28.276 18:06:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.276 18:06:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:28.533 18:06:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.534 18:06:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:28.534 18:06:53 event.app_repeat -- event/event.sh@39 -- # killprocess 58353 00:08:28.534 18:06:53 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58353 ']' 00:08:28.534 18:06:53 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58353 00:08:28.534 18:06:53 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:28.534 18:06:53 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.534 18:06:53 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58353 00:08:28.534 killing process with pid 58353 00:08:28.534 18:06:53 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.534 18:06:53 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.534 18:06:53 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58353' 00:08:28.534 18:06:53 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58353 00:08:28.534 18:06:53 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58353 00:08:29.470 spdk_app_start is called in Round 0. 00:08:29.470 Shutdown signal received, stop current app iteration 00:08:29.470 Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 reinitialization... 00:08:29.470 spdk_app_start is called in Round 1. 00:08:29.470 Shutdown signal received, stop current app iteration 00:08:29.470 Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 reinitialization... 00:08:29.470 spdk_app_start is called in Round 2. 
00:08:29.470 Shutdown signal received, stop current app iteration 00:08:29.470 Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 reinitialization... 00:08:29.470 spdk_app_start is called in Round 3. 00:08:29.470 Shutdown signal received, stop current app iteration 00:08:29.470 18:06:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:29.470 18:06:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:29.470 00:08:29.470 real 0m21.597s 00:08:29.470 user 0m47.721s 00:08:29.470 sys 0m3.099s 00:08:29.470 ************************************ 00:08:29.470 END TEST app_repeat 00:08:29.470 ************************************ 00:08:29.470 18:06:54 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.470 18:06:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:29.470 18:06:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:29.470 18:06:54 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:29.470 18:06:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.470 18:06:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.470 18:06:54 event -- common/autotest_common.sh@10 -- # set +x 00:08:29.470 ************************************ 00:08:29.470 START TEST cpu_locks 00:08:29.470 ************************************ 00:08:29.470 18:06:54 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:29.470 * Looking for test storage... 
00:08:29.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:29.470 18:06:54 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:29.470 18:06:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:29.470 18:06:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:08:29.730 18:06:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.730 18:06:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:29.730 18:06:55 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.731 18:06:55 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:29.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.731 --rc genhtml_branch_coverage=1 00:08:29.731 --rc genhtml_function_coverage=1 00:08:29.731 --rc genhtml_legend=1 00:08:29.731 --rc geninfo_all_blocks=1 00:08:29.731 --rc geninfo_unexecuted_blocks=1 00:08:29.731 00:08:29.731 ' 00:08:29.731 18:06:55 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:29.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.731 --rc genhtml_branch_coverage=1 00:08:29.731 --rc genhtml_function_coverage=1 00:08:29.731 --rc genhtml_legend=1 00:08:29.731 --rc geninfo_all_blocks=1 00:08:29.731 --rc geninfo_unexecuted_blocks=1 
00:08:29.731 00:08:29.731 ' 00:08:29.731 18:06:55 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:29.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.731 --rc genhtml_branch_coverage=1 00:08:29.731 --rc genhtml_function_coverage=1 00:08:29.731 --rc genhtml_legend=1 00:08:29.731 --rc geninfo_all_blocks=1 00:08:29.731 --rc geninfo_unexecuted_blocks=1 00:08:29.731 00:08:29.731 ' 00:08:29.731 18:06:55 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:29.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.731 --rc genhtml_branch_coverage=1 00:08:29.731 --rc genhtml_function_coverage=1 00:08:29.731 --rc genhtml_legend=1 00:08:29.731 --rc geninfo_all_blocks=1 00:08:29.731 --rc geninfo_unexecuted_blocks=1 00:08:29.731 00:08:29.731 ' 00:08:29.731 18:06:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:29.731 18:06:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:29.731 18:06:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:29.731 18:06:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:29.731 18:06:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.731 18:06:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.731 18:06:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:29.731 ************************************ 00:08:29.731 START TEST default_locks 00:08:29.731 ************************************ 00:08:29.731 18:06:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:29.731 18:06:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58824 00:08:29.731 18:06:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58824 00:08:29.731 18:06:55 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:29.731 18:06:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58824 ']' 00:08:29.731 18:06:55 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.731 18:06:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.731 18:06:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.731 18:06:55 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.731 18:06:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:29.731 [2024-12-06 18:06:55.168306] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:08:29.731 [2024-12-06 18:06:55.169088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58824 ] 00:08:29.988 [2024-12-06 18:06:55.351118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.246 [2024-12-06 18:06:55.509921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.185 18:06:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.185 18:06:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:31.185 18:06:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58824 00:08:31.185 18:06:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58824 00:08:31.185 18:06:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:31.445 18:06:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58824 00:08:31.445 18:06:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58824 ']' 00:08:31.445 18:06:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58824 00:08:31.445 18:06:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:31.445 18:06:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.445 18:06:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58824 00:08:31.445 18:06:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:31.445 18:06:56 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:31.445 killing process with pid 58824 00:08:31.445 18:06:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58824' 00:08:31.445 18:06:56 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58824 00:08:31.445 18:06:56 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58824 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58824 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58824 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58824 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58824 ']' 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:33.978 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58824) - No such process 00:08:33.978 ERROR: process (pid: 58824) is no longer running 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:33.978 00:08:33.978 real 0m4.022s 00:08:33.978 user 0m4.092s 00:08:33.978 sys 0m0.779s 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.978 18:06:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:33.978 ************************************ 00:08:33.978 END TEST default_locks 00:08:33.978 ************************************ 00:08:33.978 18:06:59 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:33.978 18:06:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:08:33.978 18:06:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.978 18:06:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:33.978 ************************************ 00:08:33.978 START TEST default_locks_via_rpc 00:08:33.978 ************************************ 00:08:33.978 18:06:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:33.978 18:06:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58899 00:08:33.978 18:06:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58899 00:08:33.978 18:06:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:33.978 18:06:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58899 ']' 00:08:33.978 18:06:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.978 18:06:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.978 18:06:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.978 18:06:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.978 18:06:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.978 [2024-12-06 18:06:59.270854] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:08:33.978 [2024-12-06 18:06:59.271681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58899 ] 00:08:33.978 [2024-12-06 18:06:59.455450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.237 [2024-12-06 18:06:59.584876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.174 18:07:00 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58899 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58899 00:08:35.174 18:07:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:35.433 18:07:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58899 00:08:35.433 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58899 ']' 00:08:35.433 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58899 00:08:35.433 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:08:35.433 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.433 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58899 00:08:35.433 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.433 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.433 killing process with pid 58899 00:08:35.433 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58899' 00:08:35.433 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58899 00:08:35.433 18:07:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58899 00:08:37.964 00:08:37.964 real 0m3.988s 00:08:37.964 user 0m4.043s 00:08:37.964 sys 0m0.702s 00:08:37.964 18:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.964 18:07:03 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.964 ************************************ 00:08:37.964 END TEST default_locks_via_rpc 00:08:37.964 ************************************ 00:08:37.964 18:07:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:37.964 18:07:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.964 18:07:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.964 18:07:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:37.964 ************************************ 00:08:37.964 START TEST non_locking_app_on_locked_coremask 00:08:37.964 ************************************ 00:08:37.964 18:07:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:37.964 18:07:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58975 00:08:37.964 18:07:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58975 /var/tmp/spdk.sock 00:08:37.964 18:07:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:37.964 18:07:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58975 ']' 00:08:37.964 18:07:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.964 18:07:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:37.964 18:07:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.964 18:07:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.964 18:07:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:37.964 [2024-12-06 18:07:03.305647] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:08:37.964 [2024-12-06 18:07:03.305857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58975 ] 00:08:38.222 [2024-12-06 18:07:03.494292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.222 [2024-12-06 18:07:03.647109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.172 18:07:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.172 18:07:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:39.172 18:07:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58997 00:08:39.172 18:07:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:39.172 18:07:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58997 /var/tmp/spdk2.sock 00:08:39.172 18:07:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58997 ']' 00:08:39.172 18:07:04 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:39.172 18:07:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.172 18:07:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:39.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:39.172 18:07:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.172 18:07:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:39.172 [2024-12-06 18:07:04.654185] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:08:39.172 [2024-12-06 18:07:04.654356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58997 ] 00:08:39.431 [2024-12-06 18:07:04.857680] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:39.431 [2024-12-06 18:07:04.857803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.691 [2024-12-06 18:07:05.124689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.229 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.229 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:42.229 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58975 00:08:42.229 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58975 00:08:42.229 18:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:43.165 18:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58975 00:08:43.165 18:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58975 ']' 00:08:43.165 18:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58975 00:08:43.165 18:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:43.165 18:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.165 18:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58975 00:08:43.165 18:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.165 18:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.165 killing process with pid 58975 00:08:43.165 18:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58975' 00:08:43.165 18:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58975 00:08:43.165 18:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58975 00:08:47.357 18:07:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58997 00:08:47.357 18:07:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58997 ']' 00:08:47.357 18:07:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58997 00:08:47.357 18:07:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:47.357 18:07:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.357 18:07:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58997 00:08:47.357 18:07:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.357 18:07:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.357 killing process with pid 58997 00:08:47.357 18:07:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58997' 00:08:47.357 18:07:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58997 00:08:47.357 18:07:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58997 00:08:49.936 00:08:49.936 real 0m11.866s 00:08:49.936 user 0m12.448s 00:08:49.936 sys 0m1.576s 00:08:49.936 18:07:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:49.936 18:07:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.936 ************************************ 00:08:49.936 END TEST non_locking_app_on_locked_coremask 00:08:49.936 ************************************ 00:08:49.936 18:07:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:49.936 18:07:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.936 18:07:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.936 18:07:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:49.936 ************************************ 00:08:49.936 START TEST locking_app_on_unlocked_coremask 00:08:49.936 ************************************ 00:08:49.936 18:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:49.936 18:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59150 00:08:49.936 18:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:49.936 18:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59150 /var/tmp/spdk.sock 00:08:49.936 18:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59150 ']' 00:08:49.936 18:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.936 18:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:49.936 18:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.936 18:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.936 18:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.936 [2024-12-06 18:07:15.224355] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:08:49.936 [2024-12-06 18:07:15.224577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59150 ] 00:08:49.936 [2024-12-06 18:07:15.410133] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:49.937 [2024-12-06 18:07:15.410211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.195 [2024-12-06 18:07:15.537551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.130 18:07:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.130 18:07:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:51.130 18:07:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59167 00:08:51.130 18:07:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:51.130 18:07:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59167 /var/tmp/spdk2.sock 00:08:51.130 18:07:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59167 ']' 
00:08:51.130 18:07:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:51.130 18:07:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.130 18:07:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:51.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:51.130 18:07:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.130 18:07:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:51.130 [2024-12-06 18:07:16.531622] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:08:51.130 [2024-12-06 18:07:16.531821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59167 ] 00:08:51.389 [2024-12-06 18:07:16.737312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.648 [2024-12-06 18:07:16.997362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.183 18:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.183 18:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:54.183 18:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59167 00:08:54.183 18:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59167 00:08:54.183 18:07:19 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:54.751 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59150 00:08:54.751 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59150 ']' 00:08:54.751 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59150 00:08:54.751 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:54.751 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.751 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59150 00:08:54.751 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.751 killing process with pid 59150 00:08:54.751 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.751 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59150' 00:08:54.751 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59150 00:08:54.751 18:07:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59150 00:08:58.969 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59167 00:08:58.969 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59167 ']' 00:08:58.969 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59167 00:08:58.969 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:08:58.969 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.969 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59167 00:08:59.228 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.228 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.228 killing process with pid 59167 00:08:59.228 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59167' 00:08:59.228 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59167 00:08:59.228 18:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59167 00:09:01.761 00:09:01.761 real 0m11.606s 00:09:01.761 user 0m12.203s 00:09:01.761 sys 0m1.556s 00:09:01.761 18:07:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.761 18:07:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:01.761 ************************************ 00:09:01.761 END TEST locking_app_on_unlocked_coremask 00:09:01.761 ************************************ 00:09:01.761 18:07:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:01.761 18:07:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.761 18:07:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.761 18:07:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:01.761 ************************************ 00:09:01.761 START TEST 
locking_app_on_locked_coremask 00:09:01.761 ************************************ 00:09:01.761 18:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:01.761 18:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59316 00:09:01.761 18:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59316 /var/tmp/spdk.sock 00:09:01.761 18:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:01.761 18:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59316 ']' 00:09:01.761 18:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.761 18:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.761 18:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.761 18:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.761 18:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:01.761 [2024-12-06 18:07:26.872707] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:09:01.761 [2024-12-06 18:07:26.873462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59316 ] 00:09:01.761 [2024-12-06 18:07:27.048928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.761 [2024-12-06 18:07:27.177277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59338 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59338 /var/tmp/spdk2.sock 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59338 /var/tmp/spdk2.sock 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59338 /var/tmp/spdk2.sock 00:09:02.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59338 ']' 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.696 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:02.696 [2024-12-06 18:07:28.159246] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:09:02.696 [2024-12-06 18:07:28.159436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59338 ] 00:09:02.954 [2024-12-06 18:07:28.362364] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59316 has claimed it. 00:09:02.954 [2024-12-06 18:07:28.362473] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:09:03.523 ERROR: process (pid: 59338) is no longer running 00:09:03.523 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59338) - No such process 00:09:03.523 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.523 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:03.523 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:03.523 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:03.523 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:03.523 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:03.523 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59316 00:09:03.523 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59316 00:09:03.523 18:07:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:03.781 18:07:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59316 00:09:03.781 18:07:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59316 ']' 00:09:03.781 18:07:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59316 00:09:03.781 18:07:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:03.781 18:07:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.781 18:07:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59316 00:09:03.781 
killing process with pid 59316 00:09:03.781 18:07:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.781 18:07:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.781 18:07:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59316' 00:09:03.781 18:07:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59316 00:09:03.781 18:07:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59316 00:09:06.315 00:09:06.315 real 0m4.697s 00:09:06.315 user 0m5.008s 00:09:06.315 sys 0m0.888s 00:09:06.315 18:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.315 18:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:06.315 ************************************ 00:09:06.315 END TEST locking_app_on_locked_coremask 00:09:06.315 ************************************ 00:09:06.315 18:07:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:06.315 18:07:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.315 18:07:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.315 18:07:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:06.315 ************************************ 00:09:06.315 START TEST locking_overlapped_coremask 00:09:06.315 ************************************ 00:09:06.315 18:07:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:06.315 18:07:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59402 00:09:06.315 18:07:31 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59402 /var/tmp/spdk.sock 00:09:06.315 18:07:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:06.315 18:07:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59402 ']' 00:09:06.315 18:07:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.315 18:07:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.315 18:07:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.315 18:07:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.315 18:07:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:06.315 [2024-12-06 18:07:31.625649] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:09:06.315 [2024-12-06 18:07:31.626095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59402 ] 00:09:06.315 [2024-12-06 18:07:31.812283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:06.574 [2024-12-06 18:07:31.947846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.574 [2024-12-06 18:07:31.947942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.574 [2024-12-06 18:07:31.947966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.509 18:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.509 18:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:07.509 18:07:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:07.509 18:07:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59420 00:09:07.509 18:07:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59420 /var/tmp/spdk2.sock 00:09:07.510 18:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:07.510 18:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59420 /var/tmp/spdk2.sock 00:09:07.510 18:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:07.510 18:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.510 18:07:32 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:07.510 18:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.510 18:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59420 /var/tmp/spdk2.sock 00:09:07.510 18:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59420 ']' 00:09:07.510 18:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:07.510 18:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.510 18:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:07.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:07.510 18:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.510 18:07:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:07.510 [2024-12-06 18:07:32.909594] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:09:07.510 [2024-12-06 18:07:32.909919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59420 ] 00:09:07.770 [2024-12-06 18:07:33.107386] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59402 has claimed it. 00:09:07.770 [2024-12-06 18:07:33.107480] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:09:08.338 ERROR: process (pid: 59420) is no longer running 00:09:08.338 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59420) - No such process 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59402 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59402 ']' 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59402 00:09:08.338 18:07:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59402 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59402' 00:09:08.338 killing process with pid 59402 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59402 00:09:08.338 18:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59402 00:09:10.872 00:09:10.872 real 0m4.443s 00:09:10.872 user 0m12.156s 00:09:10.872 sys 0m0.649s 00:09:10.872 18:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.872 18:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:10.872 ************************************ 00:09:10.872 END TEST locking_overlapped_coremask 00:09:10.872 ************************************ 00:09:10.872 18:07:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:10.872 18:07:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.872 18:07:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.872 18:07:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:10.872 ************************************ 00:09:10.872 START TEST 
locking_overlapped_coremask_via_rpc 00:09:10.872 ************************************ 00:09:10.872 18:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:10.872 18:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59484 00:09:10.872 18:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59484 /var/tmp/spdk.sock 00:09:10.872 18:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:10.872 18:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59484 ']' 00:09:10.872 18:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.872 18:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.872 18:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.872 18:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.872 18:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.872 [2024-12-06 18:07:36.125492] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:09:10.872 [2024-12-06 18:07:36.125982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59484 ] 00:09:10.872 [2024-12-06 18:07:36.310822] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:10.872 [2024-12-06 18:07:36.311192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:11.131 [2024-12-06 18:07:36.446518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.131 [2024-12-06 18:07:36.446653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.131 [2024-12-06 18:07:36.446665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:12.127 18:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.127 18:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:12.127 18:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59513 00:09:12.127 18:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:12.127 18:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59513 /var/tmp/spdk2.sock 00:09:12.127 18:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59513 ']' 00:09:12.127 18:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:12.127 18:07:37 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.127 18:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:12.127 18:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.127 18:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.127 [2024-12-06 18:07:37.437345] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:09:12.127 [2024-12-06 18:07:37.438062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59513 ] 00:09:12.127 [2024-12-06 18:07:37.645604] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:12.127 [2024-12-06 18:07:37.645686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:12.694 [2024-12-06 18:07:37.919246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.694 [2024-12-06 18:07:37.922878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.694 [2024-12-06 18:07:37.922897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.222 18:07:40 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.222 [2024-12-06 18:07:40.193990] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59484 has claimed it. 00:09:15.222 request: 00:09:15.222 { 00:09:15.222 "method": "framework_enable_cpumask_locks", 00:09:15.222 "req_id": 1 00:09:15.222 } 00:09:15.222 Got JSON-RPC error response 00:09:15.222 response: 00:09:15.222 { 00:09:15.222 "code": -32603, 00:09:15.222 "message": "Failed to claim CPU core: 2" 00:09:15.222 } 00:09:15.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59484 /var/tmp/spdk.sock 00:09:15.222 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59484 ']' 00:09:15.223 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.223 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.223 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:15.223 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.223 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.223 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.223 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:15.223 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59513 /var/tmp/spdk2.sock 00:09:15.223 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59513 ']' 00:09:15.223 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:15.223 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.223 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:15.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:15.223 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.223 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.481 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.481 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:15.481 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:15.481 ************************************ 00:09:15.481 END TEST locking_overlapped_coremask_via_rpc 00:09:15.481 ************************************ 00:09:15.481 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:15.481 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:15.481 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:15.481 00:09:15.481 real 0m4.885s 00:09:15.481 user 0m1.837s 00:09:15.481 sys 0m0.228s 00:09:15.481 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.481 18:07:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.481 18:07:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:15.481 18:07:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59484 ]] 00:09:15.481 18:07:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59484 00:09:15.481 18:07:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59484 ']' 00:09:15.481 18:07:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59484 00:09:15.481 18:07:40 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:15.481 18:07:40 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.481 18:07:40 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59484 00:09:15.481 killing process with pid 59484 00:09:15.481 18:07:40 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.481 18:07:40 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.481 18:07:40 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59484' 00:09:15.481 18:07:40 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59484 00:09:15.481 18:07:40 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59484 00:09:18.065 18:07:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59513 ]] 00:09:18.065 18:07:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59513 00:09:18.065 18:07:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59513 ']' 00:09:18.065 18:07:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59513 00:09:18.065 18:07:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:18.065 18:07:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.066 18:07:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59513 00:09:18.066 killing process with pid 59513 00:09:18.066 18:07:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:18.066 18:07:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:18.066 18:07:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59513' 00:09:18.066 18:07:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59513 00:09:18.066 18:07:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59513 00:09:20.601 18:07:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:20.601 Process with pid 59484 is not found 00:09:20.601 Process with pid 59513 is not found 00:09:20.601 18:07:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:20.601 18:07:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59484 ]] 00:09:20.601 18:07:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59484 00:09:20.601 18:07:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59484 ']' 00:09:20.601 18:07:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59484 00:09:20.601 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59484) - No such process 00:09:20.601 18:07:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59484 is not found' 00:09:20.601 18:07:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59513 ]] 00:09:20.601 18:07:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59513 00:09:20.601 18:07:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59513 ']' 00:09:20.601 18:07:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59513 00:09:20.601 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59513) - No such process 00:09:20.601 18:07:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59513 is not found' 00:09:20.601 18:07:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:20.601 ************************************ 00:09:20.601 END TEST cpu_locks 00:09:20.601 ************************************ 00:09:20.601 00:09:20.601 real 0m50.719s 00:09:20.601 user 1m28.693s 00:09:20.601 sys 0m7.640s 00:09:20.601 18:07:45 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:20.601 18:07:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:20.601 ************************************ 00:09:20.601 END TEST event 00:09:20.601 ************************************ 00:09:20.601 00:09:20.601 real 1m23.047s 00:09:20.601 user 2m33.479s 00:09:20.601 sys 0m11.830s 00:09:20.601 18:07:45 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.601 18:07:45 event -- common/autotest_common.sh@10 -- # set +x 00:09:20.601 18:07:45 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:20.601 18:07:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.601 18:07:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.601 18:07:45 -- common/autotest_common.sh@10 -- # set +x 00:09:20.601 ************************************ 00:09:20.601 START TEST thread 00:09:20.601 ************************************ 00:09:20.601 18:07:45 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:20.601 * Looking for test storage... 
00:09:20.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:20.601 18:07:45 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:20.601 18:07:45 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:09:20.601 18:07:45 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:20.601 18:07:45 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:20.601 18:07:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.601 18:07:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.601 18:07:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.601 18:07:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.601 18:07:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.601 18:07:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.601 18:07:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.601 18:07:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.601 18:07:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.601 18:07:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.601 18:07:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.601 18:07:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:20.601 18:07:45 thread -- scripts/common.sh@345 -- # : 1 00:09:20.601 18:07:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.601 18:07:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.601 18:07:45 thread -- scripts/common.sh@365 -- # decimal 1 00:09:20.601 18:07:45 thread -- scripts/common.sh@353 -- # local d=1 00:09:20.601 18:07:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.601 18:07:45 thread -- scripts/common.sh@355 -- # echo 1 00:09:20.601 18:07:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.601 18:07:45 thread -- scripts/common.sh@366 -- # decimal 2 00:09:20.601 18:07:45 thread -- scripts/common.sh@353 -- # local d=2 00:09:20.601 18:07:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.601 18:07:45 thread -- scripts/common.sh@355 -- # echo 2 00:09:20.601 18:07:45 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.601 18:07:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.601 18:07:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.601 18:07:45 thread -- scripts/common.sh@368 -- # return 0 00:09:20.601 18:07:45 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.601 18:07:45 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:20.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.601 --rc genhtml_branch_coverage=1 00:09:20.601 --rc genhtml_function_coverage=1 00:09:20.601 --rc genhtml_legend=1 00:09:20.601 --rc geninfo_all_blocks=1 00:09:20.601 --rc geninfo_unexecuted_blocks=1 00:09:20.601 00:09:20.601 ' 00:09:20.601 18:07:45 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:20.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.601 --rc genhtml_branch_coverage=1 00:09:20.601 --rc genhtml_function_coverage=1 00:09:20.601 --rc genhtml_legend=1 00:09:20.601 --rc geninfo_all_blocks=1 00:09:20.601 --rc geninfo_unexecuted_blocks=1 00:09:20.601 00:09:20.601 ' 00:09:20.601 18:07:45 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:20.601 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.601 --rc genhtml_branch_coverage=1 00:09:20.601 --rc genhtml_function_coverage=1 00:09:20.601 --rc genhtml_legend=1 00:09:20.602 --rc geninfo_all_blocks=1 00:09:20.602 --rc geninfo_unexecuted_blocks=1 00:09:20.602 00:09:20.602 ' 00:09:20.602 18:07:45 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:20.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.602 --rc genhtml_branch_coverage=1 00:09:20.602 --rc genhtml_function_coverage=1 00:09:20.602 --rc genhtml_legend=1 00:09:20.602 --rc geninfo_all_blocks=1 00:09:20.602 --rc geninfo_unexecuted_blocks=1 00:09:20.602 00:09:20.602 ' 00:09:20.602 18:07:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:20.602 18:07:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:20.602 18:07:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.602 18:07:45 thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.602 ************************************ 00:09:20.602 START TEST thread_poller_perf 00:09:20.602 ************************************ 00:09:20.602 18:07:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:20.602 [2024-12-06 18:07:45.914653] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:09:20.602 [2024-12-06 18:07:45.914836] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59708 ] 00:09:20.602 [2024-12-06 18:07:46.106509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.861 [2024-12-06 18:07:46.259087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.861 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:22.240 [2024-12-06T18:07:47.760Z] ====================================== 00:09:22.240 [2024-12-06T18:07:47.760Z] busy:2213944159 (cyc) 00:09:22.240 [2024-12-06T18:07:47.760Z] total_run_count: 306000 00:09:22.240 [2024-12-06T18:07:47.760Z] tsc_hz: 2200000000 (cyc) 00:09:22.240 [2024-12-06T18:07:47.760Z] ====================================== 00:09:22.240 [2024-12-06T18:07:47.760Z] poller_cost: 7235 (cyc), 3288 (nsec) 00:09:22.240 00:09:22.240 ************************************ 00:09:22.240 END TEST thread_poller_perf 00:09:22.240 ************************************ 00:09:22.240 real 0m1.637s 00:09:22.240 user 0m1.415s 00:09:22.240 sys 0m0.111s 00:09:22.240 18:07:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.240 18:07:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:22.240 18:07:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:22.240 18:07:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:22.240 18:07:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.240 18:07:47 thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.240 ************************************ 00:09:22.240 START TEST thread_poller_perf 00:09:22.240 
************************************ 00:09:22.240 18:07:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:22.240 [2024-12-06 18:07:47.605108] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:09:22.240 [2024-12-06 18:07:47.605288] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59745 ] 00:09:22.511 [2024-12-06 18:07:47.789674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.511 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:22.511 [2024-12-06 18:07:47.915579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.890 [2024-12-06T18:07:49.410Z] ====================================== 00:09:23.890 [2024-12-06T18:07:49.410Z] busy:2203735831 (cyc) 00:09:23.890 [2024-12-06T18:07:49.410Z] total_run_count: 3580000 00:09:23.890 [2024-12-06T18:07:49.410Z] tsc_hz: 2200000000 (cyc) 00:09:23.890 [2024-12-06T18:07:49.410Z] ====================================== 00:09:23.890 [2024-12-06T18:07:49.410Z] poller_cost: 615 (cyc), 279 (nsec) 00:09:23.890 00:09:23.890 real 0m1.587s 00:09:23.890 user 0m1.377s 00:09:23.890 sys 0m0.101s 00:09:23.890 18:07:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.890 ************************************ 00:09:23.890 END TEST thread_poller_perf 00:09:23.890 ************************************ 00:09:23.890 18:07:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:23.890 18:07:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:23.890 00:09:23.890 real 0m3.499s 00:09:23.890 user 0m2.937s 00:09:23.890 sys 0m0.336s 00:09:23.890 18:07:49 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.890 18:07:49 thread -- common/autotest_common.sh@10 -- # set +x 00:09:23.890 ************************************ 00:09:23.890 END TEST thread 00:09:23.890 ************************************ 00:09:23.890 18:07:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:23.890 18:07:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:23.890 18:07:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:23.890 18:07:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.890 18:07:49 -- common/autotest_common.sh@10 -- # set +x 00:09:23.890 ************************************ 00:09:23.890 START TEST app_cmdline 00:09:23.890 ************************************ 00:09:23.890 18:07:49 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:23.890 * Looking for test storage... 00:09:23.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:23.890 18:07:49 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:23.890 18:07:49 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:09:23.890 18:07:49 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:23.890 18:07:49 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.890 18:07:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:23.891 18:07:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:23.891 18:07:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.891 18:07:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:23.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:23.891 18:07:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.891 18:07:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.891 18:07:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.891 18:07:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:23.891 18:07:49 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.891 18:07:49 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:23.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.891 --rc genhtml_branch_coverage=1 00:09:23.891 --rc genhtml_function_coverage=1 00:09:23.891 --rc genhtml_legend=1 00:09:23.891 --rc geninfo_all_blocks=1 00:09:23.891 --rc geninfo_unexecuted_blocks=1 00:09:23.891 00:09:23.891 ' 00:09:23.891 18:07:49 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:23.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.891 --rc genhtml_branch_coverage=1 00:09:23.891 --rc genhtml_function_coverage=1 00:09:23.891 --rc genhtml_legend=1 00:09:23.891 --rc geninfo_all_blocks=1 00:09:23.891 --rc geninfo_unexecuted_blocks=1 00:09:23.891 00:09:23.891 ' 00:09:23.891 18:07:49 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:23.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.891 --rc genhtml_branch_coverage=1 00:09:23.891 --rc genhtml_function_coverage=1 00:09:23.891 --rc genhtml_legend=1 00:09:23.891 --rc geninfo_all_blocks=1 00:09:23.891 --rc geninfo_unexecuted_blocks=1 00:09:23.891 00:09:23.891 ' 00:09:23.891 18:07:49 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:23.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.891 --rc genhtml_branch_coverage=1 00:09:23.891 --rc genhtml_function_coverage=1 00:09:23.891 --rc genhtml_legend=1 00:09:23.891 --rc geninfo_all_blocks=1 00:09:23.891 --rc 
geninfo_unexecuted_blocks=1 00:09:23.891 00:09:23.891 ' 00:09:23.891 18:07:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:23.891 18:07:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59830 00:09:23.891 18:07:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59830 00:09:23.891 18:07:49 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:23.891 18:07:49 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59830 ']' 00:09:23.891 18:07:49 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.891 18:07:49 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.891 18:07:49 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.891 18:07:49 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.891 18:07:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:24.150 [2024-12-06 18:07:49.531360] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
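The `scripts/common.sh` trace above (`cmp_versions 1.15 '<' 2`) shows the component-wise version compare the harness runs before enabling LCOV options: split both versions on `.-:`, then compare each decimal component until one side wins. The sketch below is a hedged reconstruction of that logic, not the actual SPDK source — the function name `lt` matches the trace, but the body is simplified (it skips the `decimal` regex validation step visible in the trace):

```shell
# Illustrative sketch of the version compare traced above.
# Returns 0 (true) when $1 < $2, component by component.
lt() {
    local IFS=.-:                 # split on the same separators as the trace
    local -a ver1=($1) ver2=($2)  # word splitting builds the component arrays
    local v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components compare as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                      # equal versions are not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

In the trace, `lt 1.15 2` succeeding (return 0) is what gates the `lcov_rc_opt` assignment that follows it.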
00:09:24.150 [2024-12-06 18:07:49.531758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59830 ] 00:09:24.409 [2024-12-06 18:07:49.718099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.409 [2024-12-06 18:07:49.853382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.353 18:07:50 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.353 18:07:50 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:25.353 18:07:50 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:25.624 { 00:09:25.624 "version": "SPDK v25.01-pre git sha1 60adca7e1", 00:09:25.624 "fields": { 00:09:25.624 "major": 25, 00:09:25.624 "minor": 1, 00:09:25.624 "patch": 0, 00:09:25.624 "suffix": "-pre", 00:09:25.624 "commit": "60adca7e1" 00:09:25.624 } 00:09:25.624 } 00:09:25.624 18:07:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:25.624 18:07:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:25.624 18:07:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:25.624 18:07:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:25.624 18:07:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:25.624 18:07:51 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.624 18:07:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:25.624 18:07:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:25.624 18:07:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:25.624 18:07:51 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.624 18:07:51 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:25.624 18:07:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:25.624 18:07:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:25.624 18:07:51 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:25.624 18:07:51 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:25.624 18:07:51 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.624 18:07:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.624 18:07:51 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.624 18:07:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.624 18:07:51 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.624 18:07:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.624 18:07:51 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.624 18:07:51 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:25.624 18:07:51 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:25.883 request: 00:09:25.883 { 00:09:25.883 "method": "env_dpdk_get_mem_stats", 00:09:25.883 "req_id": 1 00:09:25.883 } 00:09:25.883 Got JSON-RPC error response 00:09:25.883 response: 00:09:25.883 { 00:09:25.883 "code": -32601, 00:09:25.883 "message": "Method not found" 00:09:25.883 } 00:09:26.141 18:07:51 app_cmdline -- common/autotest_common.sh@655 -- # es=1 
00:09:26.141 18:07:51 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:26.141 18:07:51 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:26.141 18:07:51 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:26.141 18:07:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59830 00:09:26.141 18:07:51 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59830 ']' 00:09:26.141 18:07:51 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59830 00:09:26.141 18:07:51 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:26.141 18:07:51 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.141 18:07:51 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59830 00:09:26.141 killing process with pid 59830 00:09:26.141 18:07:51 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.141 18:07:51 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.141 18:07:51 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59830' 00:09:26.141 18:07:51 app_cmdline -- common/autotest_common.sh@973 -- # kill 59830 00:09:26.141 18:07:51 app_cmdline -- common/autotest_common.sh@978 -- # wait 59830 00:09:28.674 00:09:28.674 real 0m4.409s 00:09:28.674 user 0m4.902s 00:09:28.674 sys 0m0.641s 00:09:28.674 18:07:53 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.674 ************************************ 00:09:28.674 END TEST app_cmdline 00:09:28.674 ************************************ 00:09:28.674 18:07:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:28.674 18:07:53 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:28.674 18:07:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.674 18:07:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.674 18:07:53 -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.674 ************************************ 00:09:28.674 START TEST version 00:09:28.674 ************************************ 00:09:28.674 18:07:53 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:28.674 * Looking for test storage... 00:09:28.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:28.674 18:07:53 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:28.674 18:07:53 version -- common/autotest_common.sh@1711 -- # lcov --version 00:09:28.674 18:07:53 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:28.674 18:07:53 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:28.674 18:07:53 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.674 18:07:53 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.674 18:07:53 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.674 18:07:53 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.674 18:07:53 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.674 18:07:53 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.674 18:07:53 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.674 18:07:53 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.674 18:07:53 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.674 18:07:53 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.674 18:07:53 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.674 18:07:53 version -- scripts/common.sh@344 -- # case "$op" in 00:09:28.674 18:07:53 version -- scripts/common.sh@345 -- # : 1 00:09:28.674 18:07:53 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.674 18:07:53 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.674 18:07:53 version -- scripts/common.sh@365 -- # decimal 1 00:09:28.674 18:07:53 version -- scripts/common.sh@353 -- # local d=1 00:09:28.674 18:07:53 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.674 18:07:53 version -- scripts/common.sh@355 -- # echo 1 00:09:28.674 18:07:53 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.674 18:07:53 version -- scripts/common.sh@366 -- # decimal 2 00:09:28.674 18:07:53 version -- scripts/common.sh@353 -- # local d=2 00:09:28.674 18:07:53 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.674 18:07:53 version -- scripts/common.sh@355 -- # echo 2 00:09:28.674 18:07:53 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.674 18:07:53 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.674 18:07:53 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.674 18:07:53 version -- scripts/common.sh@368 -- # return 0 00:09:28.674 18:07:53 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.674 18:07:53 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:28.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.675 --rc genhtml_branch_coverage=1 00:09:28.675 --rc genhtml_function_coverage=1 00:09:28.675 --rc genhtml_legend=1 00:09:28.675 --rc geninfo_all_blocks=1 00:09:28.675 --rc geninfo_unexecuted_blocks=1 00:09:28.675 00:09:28.675 ' 00:09:28.675 18:07:53 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:28.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.675 --rc genhtml_branch_coverage=1 00:09:28.675 --rc genhtml_function_coverage=1 00:09:28.675 --rc genhtml_legend=1 00:09:28.675 --rc geninfo_all_blocks=1 00:09:28.675 --rc geninfo_unexecuted_blocks=1 00:09:28.675 00:09:28.675 ' 00:09:28.675 18:07:53 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:28.675 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.675 --rc genhtml_branch_coverage=1 00:09:28.675 --rc genhtml_function_coverage=1 00:09:28.675 --rc genhtml_legend=1 00:09:28.675 --rc geninfo_all_blocks=1 00:09:28.675 --rc geninfo_unexecuted_blocks=1 00:09:28.675 00:09:28.675 ' 00:09:28.675 18:07:53 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:28.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.675 --rc genhtml_branch_coverage=1 00:09:28.675 --rc genhtml_function_coverage=1 00:09:28.675 --rc genhtml_legend=1 00:09:28.675 --rc geninfo_all_blocks=1 00:09:28.675 --rc geninfo_unexecuted_blocks=1 00:09:28.675 00:09:28.675 ' 00:09:28.675 18:07:53 version -- app/version.sh@17 -- # get_header_version major 00:09:28.675 18:07:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:28.675 18:07:53 version -- app/version.sh@14 -- # cut -f2 00:09:28.675 18:07:53 version -- app/version.sh@14 -- # tr -d '"' 00:09:28.675 18:07:53 version -- app/version.sh@17 -- # major=25 00:09:28.675 18:07:53 version -- app/version.sh@18 -- # get_header_version minor 00:09:28.675 18:07:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:28.675 18:07:53 version -- app/version.sh@14 -- # tr -d '"' 00:09:28.675 18:07:53 version -- app/version.sh@14 -- # cut -f2 00:09:28.675 18:07:53 version -- app/version.sh@18 -- # minor=1 00:09:28.675 18:07:53 version -- app/version.sh@19 -- # get_header_version patch 00:09:28.675 18:07:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:28.675 18:07:53 version -- app/version.sh@14 -- # tr -d '"' 00:09:28.675 18:07:53 version -- app/version.sh@14 -- # cut -f2 00:09:28.675 18:07:53 version -- app/version.sh@19 -- # patch=0 00:09:28.675 
18:07:53 version -- app/version.sh@20 -- # get_header_version suffix 00:09:28.675 18:07:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:28.675 18:07:53 version -- app/version.sh@14 -- # cut -f2 00:09:28.675 18:07:53 version -- app/version.sh@14 -- # tr -d '"' 00:09:28.675 18:07:53 version -- app/version.sh@20 -- # suffix=-pre 00:09:28.675 18:07:53 version -- app/version.sh@22 -- # version=25.1 00:09:28.675 18:07:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:28.675 18:07:53 version -- app/version.sh@28 -- # version=25.1rc0 00:09:28.675 18:07:53 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:28.675 18:07:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:28.675 18:07:53 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:28.675 18:07:53 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:28.675 00:09:28.675 real 0m0.280s 00:09:28.675 user 0m0.165s 00:09:28.675 sys 0m0.132s 00:09:28.675 ************************************ 00:09:28.675 END TEST version 00:09:28.675 ************************************ 00:09:28.675 18:07:53 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.675 18:07:53 version -- common/autotest_common.sh@10 -- # set +x 00:09:28.675 18:07:54 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:28.675 18:07:54 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:09:28.675 18:07:54 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:28.675 18:07:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.675 18:07:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.675 18:07:54 -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.675 ************************************ 00:09:28.675 START TEST bdev_raid 00:09:28.675 ************************************ 00:09:28.675 18:07:54 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:28.675 * Looking for test storage... 00:09:28.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:28.675 18:07:54 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:28.675 18:07:54 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:09:28.675 18:07:54 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:28.675 18:07:54 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@345 -- # : 1 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.675 18:07:54 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:09:28.933 18:07:54 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.933 18:07:54 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:09:28.933 18:07:54 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:09:28.933 18:07:54 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.933 18:07:54 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:09:28.933 18:07:54 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.933 18:07:54 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.933 18:07:54 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.933 18:07:54 bdev_raid -- scripts/common.sh@368 -- # return 0 00:09:28.933 18:07:54 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.933 18:07:54 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:28.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.933 --rc genhtml_branch_coverage=1 00:09:28.933 --rc genhtml_function_coverage=1 00:09:28.933 --rc genhtml_legend=1 00:09:28.933 --rc geninfo_all_blocks=1 00:09:28.933 --rc geninfo_unexecuted_blocks=1 00:09:28.933 00:09:28.933 ' 00:09:28.933 18:07:54 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:28.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.933 --rc genhtml_branch_coverage=1 00:09:28.933 --rc genhtml_function_coverage=1 00:09:28.933 --rc genhtml_legend=1 00:09:28.933 --rc geninfo_all_blocks=1 00:09:28.933 --rc geninfo_unexecuted_blocks=1 00:09:28.933 00:09:28.933 ' 00:09:28.933 18:07:54 bdev_raid -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:09:28.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.933 --rc genhtml_branch_coverage=1 00:09:28.933 --rc genhtml_function_coverage=1 00:09:28.933 --rc genhtml_legend=1 00:09:28.933 --rc geninfo_all_blocks=1 00:09:28.933 --rc geninfo_unexecuted_blocks=1 00:09:28.933 00:09:28.933 ' 00:09:28.933 18:07:54 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:28.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.933 --rc genhtml_branch_coverage=1 00:09:28.933 --rc genhtml_function_coverage=1 00:09:28.933 --rc genhtml_legend=1 00:09:28.933 --rc geninfo_all_blocks=1 00:09:28.933 --rc geninfo_unexecuted_blocks=1 00:09:28.933 00:09:28.933 ' 00:09:28.933 18:07:54 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:28.933 18:07:54 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:28.933 18:07:54 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:09:28.933 18:07:54 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:09:28.933 18:07:54 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:09:28.933 18:07:54 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:09:28.933 18:07:54 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:09:28.933 18:07:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.933 18:07:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.933 18:07:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.933 ************************************ 00:09:28.933 START TEST raid1_resize_data_offset_test 00:09:28.933 ************************************ 00:09:28.933 18:07:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:09:28.933 18:07:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=60023 00:09:28.933 18:07:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60023' 00:09:28.933 Process raid pid: 60023 00:09:28.933 18:07:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:28.933 18:07:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60023 00:09:28.933 18:07:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60023 ']' 00:09:28.933 18:07:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.933 18:07:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.933 18:07:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.933 18:07:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.933 18:07:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.933 [2024-12-06 18:07:54.327221] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:09:28.934 [2024-12-06 18:07:54.327644] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.192 [2024-12-06 18:07:54.513324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.192 [2024-12-06 18:07:54.641988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.449 [2024-12-06 18:07:54.848084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.449 [2024-12-06 18:07:54.848308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.014 malloc0 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.014 malloc1 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.014 18:07:55 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.014 null0 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.014 [2024-12-06 18:07:55.429818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:09:30.014 [2024-12-06 18:07:55.432654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:30.014 [2024-12-06 18:07:55.432749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:09:30.014 [2024-12-06 18:07:55.432975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:30.014 [2024-12-06 18:07:55.432997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:09:30.014 [2024-12-06 18:07:55.433336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:30.014 [2024-12-06 18:07:55.433722] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:30.014 [2024-12-06 18:07:55.433759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:30.014 [2024-12-06 18:07:55.433997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.014 [2024-12-06 18:07:55.490024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.014 18:07:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.580 malloc2 00:09:30.580 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.580 18:07:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:09:30.580 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.580 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.580 [2024-12-06 18:07:56.038718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:30.580 [2024-12-06 18:07:56.055955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:30.580 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.580 [2024-12-06 18:07:56.058734] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:09:30.580 18:07:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.580 18:07:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:09:30.580 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.580 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.580 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.839 18:07:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:09:30.839 18:07:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60023 00:09:30.839 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60023 ']' 00:09:30.839 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60023 00:09:30.839 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:09:30.839 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:09:30.839 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60023 00:09:30.839 killing process with pid 60023 00:09:30.839 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.839 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.839 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60023' 00:09:30.839 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60023 00:09:30.839 18:07:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60023 00:09:30.839 [2024-12-06 18:07:56.137617] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.839 [2024-12-06 18:07:56.139800] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:09:30.839 [2024-12-06 18:07:56.139881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.839 [2024-12-06 18:07:56.139909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:09:30.839 [2024-12-06 18:07:56.170480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.839 [2024-12-06 18:07:56.170924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.839 [2024-12-06 18:07:56.170950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:32.741 [2024-12-06 18:07:57.820246] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.678 ************************************ 00:09:33.678 END TEST raid1_resize_data_offset_test 00:09:33.678 ************************************ 00:09:33.678 18:07:58 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:09:33.678 00:09:33.678 real 0m4.664s 00:09:33.678 user 0m4.582s 00:09:33.678 sys 0m0.614s 00:09:33.678 18:07:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.678 18:07:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.678 18:07:58 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:09:33.678 18:07:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:33.678 18:07:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.678 18:07:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.678 ************************************ 00:09:33.678 START TEST raid0_resize_superblock_test 00:09:33.678 ************************************ 00:09:33.678 18:07:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:09:33.678 18:07:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:09:33.678 Process raid pid: 60107 00:09:33.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:33.678 18:07:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60107 00:09:33.678 18:07:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60107' 00:09:33.678 18:07:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60107 00:09:33.678 18:07:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:33.678 18:07:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60107 ']' 00:09:33.678 18:07:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.678 18:07:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.678 18:07:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.678 18:07:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.678 18:07:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.678 [2024-12-06 18:07:59.037934] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:09:33.679 [2024-12-06 18:07:59.038115] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.938 [2024-12-06 18:07:59.222339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.938 [2024-12-06 18:07:59.349127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.219 [2024-12-06 18:07:59.553074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.219 [2024-12-06 18:07:59.553119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.790 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.790 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:34.790 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:09:34.790 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.790 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.048 malloc0 00:09:35.049 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.049 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:35.049 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.049 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.049 [2024-12-06 18:08:00.546399] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:35.049 [2024-12-06 18:08:00.546476] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.049 [2024-12-06 18:08:00.546514] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:35.049 [2024-12-06 18:08:00.546537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.049 [2024-12-06 18:08:00.549267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.049 [2024-12-06 18:08:00.549444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:35.049 pt0 00:09:35.049 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.049 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:09:35.049 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.049 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.308 997b8da8-aec9-4674-92d4-c299382fd19b 00:09:35.308 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.308 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:09:35.308 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.309 c3d62916-f6b1-4151-a57a-6c7ed62e8555 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.309 18:08:00 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.309 17d9fcef-a2f8-4ad8-885c-513d089d710b 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.309 [2024-12-06 18:08:00.689115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c3d62916-f6b1-4151-a57a-6c7ed62e8555 is claimed 00:09:35.309 [2024-12-06 18:08:00.689226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 17d9fcef-a2f8-4ad8-885c-513d089d710b is claimed 00:09:35.309 [2024-12-06 18:08:00.689411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:35.309 [2024-12-06 18:08:00.689436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:09:35.309 [2024-12-06 18:08:00.689761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:35.309 [2024-12-06 18:08:00.690053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:35.309 [2024-12-06 18:08:00.690078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:35.309 [2024-12-06 18:08:00.690267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.309 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.309 [2024-12-06 
18:08:00.809439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.567 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.567 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.568 [2024-12-06 18:08:00.853415] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:35.568 [2024-12-06 18:08:00.853559] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c3d62916-f6b1-4151-a57a-6c7ed62e8555' was resized: old size 131072, new size 204800 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.568 [2024-12-06 18:08:00.861296] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:35.568 [2024-12-06 18:08:00.861435] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '17d9fcef-a2f8-4ad8-885c-513d089d710b' was resized: old size 131072, new size 204800 00:09:35.568 
[2024-12-06 18:08:00.861593] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:35.568 18:08:00 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.568 [2024-12-06 18:08:00.973448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:35.568 18:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.568 [2024-12-06 18:08:01.021214] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:09:35.568 [2024-12-06 18:08:01.021300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:09:35.568 [2024-12-06 18:08:01.021330] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.568 [2024-12-06 18:08:01.021359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:35.568 [2024-12-06 18:08:01.021485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.568 [2024-12-06 18:08:01.021535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:09:35.568 [2024-12-06 18:08:01.021557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.568 [2024-12-06 18:08:01.029140] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:35.568 [2024-12-06 18:08:01.029203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.568 [2024-12-06 18:08:01.029230] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:35.568 [2024-12-06 18:08:01.029248] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.568 [2024-12-06 18:08:01.032065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.568 [2024-12-06 18:08:01.032116] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:35.568 pt0 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.568 [2024-12-06 18:08:01.034347] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c3d62916-f6b1-4151-a57a-6c7ed62e8555 00:09:35.568 [2024-12-06 
18:08:01.034424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c3d62916-f6b1-4151-a57a-6c7ed62e8555 is claimed 00:09:35.568 [2024-12-06 18:08:01.034558] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 17d9fcef-a2f8-4ad8-885c-513d089d710b 00:09:35.568 [2024-12-06 18:08:01.034590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 17d9fcef-a2f8-4ad8-885c-513d089d710b is claimed 00:09:35.568 [2024-12-06 18:08:01.034752] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 17d9fcef-a2f8-4ad8-885c-513d089d710b (2) smaller than existing raid bdev Raid (3) 00:09:35.568 [2024-12-06 18:08:01.034812] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev c3d62916-f6b1-4151-a57a-6c7ed62e8555: File exists 00:09:35.568 [2024-12-06 18:08:01.034862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:35.568 [2024-12-06 18:08:01.034880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:09:35.568 [2024-12-06 18:08:01.035237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:35.568 [2024-12-06 18:08:01.035441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:35.568 [2024-12-06 18:08:01.035456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:09:35.568 [2024-12-06 18:08:01.035638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:09:35.568 [2024-12-06 18:08:01.049444] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.568 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.827 18:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:35.827 18:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:35.827 18:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:09:35.827 18:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60107 00:09:35.827 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60107 ']' 00:09:35.827 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60107 00:09:35.827 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:35.827 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.827 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60107 00:09:35.827 killing process with pid 60107 00:09:35.827 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.827 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.827 18:08:01 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60107' 00:09:35.827 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60107 00:09:35.827 [2024-12-06 18:08:01.121480] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.827 18:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60107 00:09:35.827 [2024-12-06 18:08:01.121550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.827 [2024-12-06 18:08:01.121605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.827 [2024-12-06 18:08:01.121619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:09:37.210 [2024-12-06 18:08:02.416140] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.148 ************************************ 00:09:38.148 END TEST raid0_resize_superblock_test 00:09:38.148 18:08:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:38.148 00:09:38.148 real 0m4.564s 00:09:38.148 user 0m4.893s 00:09:38.148 sys 0m0.597s 00:09:38.148 18:08:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.148 18:08:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.148 ************************************ 00:09:38.148 18:08:03 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:09:38.148 18:08:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:38.148 18:08:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.148 18:08:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.148 ************************************ 00:09:38.148 START TEST raid1_resize_superblock_test 00:09:38.148 
************************************ 00:09:38.148 18:08:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:09:38.148 18:08:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:09:38.148 Process raid pid: 60205 00:09:38.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.148 18:08:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60205 00:09:38.148 18:08:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60205' 00:09:38.148 18:08:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:38.149 18:08:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60205 00:09:38.149 18:08:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60205 ']' 00:09:38.149 18:08:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.149 18:08:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.149 18:08:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.149 18:08:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.149 18:08:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.149 [2024-12-06 18:08:03.639108] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
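The `waitforlisten 60205` step in the trace above polls until the freshly started `bdev_svc` app accepts connections on the UNIX-domain RPC socket `/var/tmp/spdk.sock`. A minimal Python sketch of that polling loop (a hypothetical helper for illustration, not the actual SPDK shell function):

```python
import os
import socket
import time

def wait_for_rpc_sock(path, timeout=10.0, interval=0.1):
    """Poll until a UNIX-domain socket at `path` accepts connections.

    Mirrors the waitforlisten behavior in the trace: retry until the
    socket file exists and a connect() succeeds, or the timeout expires.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True  # app is up and listening
            except OSError:
                pass  # socket file exists but nobody is accepting yet
            finally:
                s.close()
        time.sleep(interval)
    return False
```

Once the socket answers, the harness drives the app through `scripts/rpc.py -s /var/tmp/spdk.sock`, as seen later in the trace.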
00:09:38.149 [2024-12-06 18:08:03.639267] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.408 [2024-12-06 18:08:03.813518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.668 [2024-12-06 18:08:03.949451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.668 [2024-12-06 18:08:04.160855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.668 [2024-12-06 18:08:04.160910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.236 18:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.236 18:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:39.236 18:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:09:39.236 18:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.236 18:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.802 malloc0 00:09:39.802 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.802 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:39.802 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.802 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.802 [2024-12-06 18:08:05.275727] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:39.802 [2024-12-06 18:08:05.275824] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.802 [2024-12-06 18:08:05.275857] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:39.802 [2024-12-06 18:08:05.275879] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.802 [2024-12-06 18:08:05.278747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.802 [2024-12-06 18:08:05.278820] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:39.802 pt0 00:09:39.802 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.802 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:09:39.802 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.802 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.061 8f0a3ed3-ee33-468a-88eb-683b4dd99921 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.061 aaf800f2-b43e-4d77-9790-0e43057adf73 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.061 18:08:05 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.061 f2fc5137-26be-4db8-bd7d-bf70659e7753 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.061 [2024-12-06 18:08:05.427608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev aaf800f2-b43e-4d77-9790-0e43057adf73 is claimed 00:09:40.061 [2024-12-06 18:08:05.427736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f2fc5137-26be-4db8-bd7d-bf70659e7753 is claimed 00:09:40.061 [2024-12-06 18:08:05.427984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:40.061 [2024-12-06 18:08:05.428009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:09:40.061 [2024-12-06 18:08:05.428413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:40.061 [2024-12-06 18:08:05.428804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:40.061 [2024-12-06 18:08:05.428832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:40.061 [2024-12-06 18:08:05.429039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.061 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:40.062 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:40.062 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:40.062 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.062 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.062 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:40.062 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:09:40.062 [2024-12-06 
18:08:05.551998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.062 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.321 [2024-12-06 18:08:05.599957] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:40.321 [2024-12-06 18:08:05.599995] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'aaf800f2-b43e-4d77-9790-0e43057adf73' was resized: old size 131072, new size 204800 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.321 [2024-12-06 18:08:05.607889] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:40.321 [2024-12-06 18:08:05.607918] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f2fc5137-26be-4db8-bd7d-bf70659e7753' was resized: old size 131072, new size 204800 00:09:40.321 
[2024-12-06 18:08:05.607956] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:40.321 18:08:05 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.321 [2024-12-06 18:08:05.732013] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.321 [2024-12-06 18:08:05.779812] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:09:40.321 [2024-12-06 18:08:05.779935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:09:40.321 [2024-12-06 18:08:05.779976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:40.321 [2024-12-06 18:08:05.780232] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.321 [2024-12-06 18:08:05.780554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.321 [2024-12-06 18:08:05.780685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.321 
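The block counts checked above follow directly from the sizes in the trace: each 64 MiB lvol is 131072 blocks of 512 B, the raid1 bdev reports 122880 blocks, and after resizing both lvols to 100 MiB (204800 blocks) it reports 196608. A short Python sketch of that arithmetic; note the 8192-block per-member reservation is inferred from the logged numbers (131072 → 122880), not taken from SPDK documentation:

```python
BLOCK_SIZE = 512        # blocklen reported in the trace
MIB = 1024 * 1024

def blocks(mib):
    """Convert a size in MiB to 512-byte block count."""
    return mib * MIB // BLOCK_SIZE

# Usable data region per base bdev when the superblock is enabled (-s);
# the 8192-block reservation is inferred from this trace's numbers.
RESERVED_BLOCKS = 8192

def raid1_num_blocks(base_blocks):
    # raid1 capacity equals one member's usable data region
    return base_blocks - RESERVED_BLOCKS

def raid0_num_blocks(base_blocks, num_members):
    # raid0 capacity is the sum of all members' usable data regions
    return (base_blocks - RESERVED_BLOCKS) * num_members
```

This reproduces every comparison in the trace: `raid1_num_blocks(blocks(64))` gives 122880, `raid1_num_blocks(blocks(100))` gives 196608, and `raid0_num_blocks(blocks(100), 2)` gives the 393216 checked in the earlier raid0 test.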
[2024-12-06 18:08:05.780708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:40.321 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.322 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.322 [2024-12-06 18:08:05.787646] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:40.322 [2024-12-06 18:08:05.787874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.322 [2024-12-06 18:08:05.787945] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:40.322 [2024-12-06 18:08:05.788057] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.322 [2024-12-06 18:08:05.791162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.322 [2024-12-06 18:08:05.791332] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:40.322 pt0 00:09:40.322 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.322 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:40.322 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.322 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.322 [2024-12-06 18:08:05.793792] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev aaf800f2-b43e-4d77-9790-0e43057adf73 00:09:40.322 [2024-12-06 18:08:05.794057] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev aaf800f2-b43e-4d77-9790-0e43057adf73 is claimed 00:09:40.322 [2024-12-06 18:08:05.794335] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f2fc5137-26be-4db8-bd7d-bf70659e7753 00:09:40.322 [2024-12-06 18:08:05.794503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f2fc5137-26be-4db8-bd7d-bf70659e7753 is claimed 00:09:40.322 [2024-12-06 18:08:05.794816] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f2fc5137-26be-4db8-bd7d-bf70659e7753 (2) smaller than existing raid bdev Raid (3) 00:09:40.322 [2024-12-06 18:08:05.794852] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev aaf800f2-b43e-4d77-9790-0e43057adf73: File exists 00:09:40.322 [2024-12-06 18:08:05.794909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:40.322 [2024-12-06 18:08:05.794929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:40.322 [2024-12-06 18:08:05.795262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:40.322 [2024-12-06 18:08:05.795476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:40.322 [2024-12-06 18:08:05.795491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:09:40.322 [2024-12-06 18:08:05.795862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.322 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.322 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:40.322 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:40.322 18:08:05 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.322 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.322 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:40.322 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:09:40.322 [2024-12-06 18:08:05.808079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.322 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.581 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:40.581 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:40.581 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:09:40.581 18:08:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60205 00:09:40.581 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60205 ']' 00:09:40.581 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60205 00:09:40.581 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:40.581 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.581 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60205 00:09:40.581 killing process with pid 60205 00:09:40.581 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.581 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.581 18:08:05 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60205' 00:09:40.581 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60205 00:09:40.582 18:08:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60205 00:09:40.582 [2024-12-06 18:08:05.888877] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.582 [2024-12-06 18:08:05.888983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.582 [2024-12-06 18:08:05.889067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.582 [2024-12-06 18:08:05.889082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:09:41.960 [2024-12-06 18:08:07.212293] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.896 ************************************ 00:09:42.897 END TEST raid1_resize_superblock_test 00:09:42.897 ************************************ 00:09:42.897 18:08:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:42.897 00:09:42.897 real 0m4.794s 00:09:42.897 user 0m5.230s 00:09:42.897 sys 0m0.622s 00:09:42.897 18:08:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.897 18:08:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.897 18:08:08 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:09:42.897 18:08:08 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:09:42.897 18:08:08 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:09:42.897 18:08:08 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:09:42.897 18:08:08 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:09:42.897 18:08:08 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:42.897 
18:08:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.897 18:08:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.897 18:08:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.897 ************************************ 00:09:42.897 START TEST raid_function_test_raid0 00:09:42.897 ************************************ 00:09:42.897 18:08:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:09:42.897 18:08:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:09:42.897 18:08:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:42.897 18:08:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:42.897 Process raid pid: 60308 00:09:42.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.897 18:08:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60308 00:09:42.897 18:08:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60308' 00:09:42.897 18:08:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60308 00:09:42.897 18:08:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60308 ']' 00:09:42.897 18:08:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:42.897 18:08:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.897 18:08:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.897 18:08:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:42.897 18:08:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.897 18:08:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:43.157 [2024-12-06 18:08:08.517934] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:09:43.157 [2024-12-06 18:08:08.518475] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.484 [2024-12-06 18:08:08.709855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.484 [2024-12-06 18:08:08.872361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.743 [2024-12-06 18:08:09.090953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.743 [2024-12-06 18:08:09.091314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:44.310 Base_1 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.310 
18:08:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:44.310 Base_2 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:44.310 [2024-12-06 18:08:09.646226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:44.310 [2024-12-06 18:08:09.648841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:44.310 [2024-12-06 18:08:09.648974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:44.310 [2024-12-06 18:08:09.648995] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:44.310 [2024-12-06 18:08:09.649363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:44.310 [2024-12-06 18:08:09.649577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:44.310 [2024-12-06 18:08:09.649594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:09:44.310 [2024-12-06 18:08:09.649988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:44.310 18:08:09 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:44.310 18:08:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:44.567 [2024-12-06 18:08:09.978330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:44.567 /dev/nbd0 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.567 1+0 records in 00:09:44.567 1+0 records out 00:09:44.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691046 s, 5.9 MB/s 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:44.567 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:45.133 { 00:09:45.133 "nbd_device": "/dev/nbd0", 00:09:45.133 "bdev_name": "raid" 00:09:45.133 } 00:09:45.133 ]' 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:45.133 { 00:09:45.133 "nbd_device": "/dev/nbd0", 00:09:45.133 "bdev_name": "raid" 00:09:45.133 } 00:09:45.133 ]' 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:45.133 4096+0 records in 00:09:45.133 4096+0 records out 00:09:45.133 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0586045 s, 35.8 MB/s 00:09:45.133 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:45.390 4096+0 records in 00:09:45.390 4096+0 records out 00:09:45.390 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.339562 s, 6.2 MB/s 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:45.390 128+0 records in 00:09:45.390 128+0 records out 00:09:45.390 65536 bytes (66 kB, 64 KiB) copied, 0.00143109 s, 45.8 MB/s 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:45.390 2035+0 records in 00:09:45.390 2035+0 records out 00:09:45.390 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0133555 s, 78.0 MB/s 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:45.390 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:45.648 456+0 records in 00:09:45.648 456+0 records out 00:09:45.648 233472 bytes (233 kB, 228 KiB) copied, 0.00302289 s, 77.2 MB/s 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.648 18:08:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:45.906 [2024-12-06 18:08:11.270106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.906 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:45.906 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:45.906 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:45.906 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.906 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.906 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:45.906 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:45.906 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.906 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:45.906 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:45.906 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60308 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60308 ']' 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60308 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60308 00:09:46.165 killing process with pid 60308 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60308' 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60308 00:09:46.165 [2024-12-06 18:08:11.662042] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.165 18:08:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60308 00:09:46.165 [2024-12-06 18:08:11.662160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.165 [2024-12-06 18:08:11.662232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.165 [2024-12-06 18:08:11.662256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:46.424 [2024-12-06 18:08:11.856397] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.418 ************************************ 00:09:47.418 END TEST raid_function_test_raid0 00:09:47.418 ************************************ 00:09:47.418 18:08:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:09:47.418 00:09:47.418 real 0m4.526s 00:09:47.418 user 0m5.528s 00:09:47.418 sys 0m1.136s 00:09:47.418 18:08:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.418 18:08:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:47.678 18:08:12 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:09:47.678 18:08:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.678 18:08:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.678 18:08:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.678 
************************************ 00:09:47.678 START TEST raid_function_test_concat 00:09:47.678 ************************************ 00:09:47.678 18:08:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:09:47.678 18:08:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:09:47.678 18:08:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:47.678 18:08:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:47.678 Process raid pid: 60442 00:09:47.678 18:08:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60442 00:09:47.678 18:08:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60442' 00:09:47.678 18:08:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60442 00:09:47.678 18:08:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:47.678 18:08:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60442 ']' 00:09:47.678 18:08:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.678 18:08:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.678 18:08:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:47.678 18:08:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.678 18:08:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:47.678 [2024-12-06 18:08:13.076249] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:09:47.678 [2024-12-06 18:08:13.076421] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.937 [2024-12-06 18:08:13.253746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.937 [2024-12-06 18:08:13.387263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.197 [2024-12-06 18:08:13.598386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.197 [2024-12-06 18:08:13.598668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:48.766 Base_1 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:48.766 Base_2 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:48.766 [2024-12-06 18:08:14.235533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:48.766 [2024-12-06 18:08:14.238017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:48.766 [2024-12-06 18:08:14.238284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:48.766 [2024-12-06 18:08:14.238315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:48.766 [2024-12-06 18:08:14.238675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:48.766 [2024-12-06 18:08:14.238917] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:48.766 [2024-12-06 18:08:14.238935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:09:48.766 [2024-12-06 18:08:14.239157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.766 18:08:14 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:48.766 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.026 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:49.026 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:49.026 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:49.026 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:49.026 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:49.026 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:49.026 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:49.026 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:49.026 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:09:49.026 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:49.026 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:49.026 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:49.026 [2024-12-06 18:08:14.539681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:49.285 /dev/nbd0 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.285 1+0 records in 00:09:49.285 1+0 records out 00:09:49.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370842 s, 11.0 MB/s 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.285 
18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:49.285 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:49.544 { 00:09:49.544 "nbd_device": "/dev/nbd0", 00:09:49.544 "bdev_name": "raid" 00:09:49.544 } 00:09:49.544 ]' 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:49.544 { 00:09:49.544 "nbd_device": "/dev/nbd0", 00:09:49.544 "bdev_name": "raid" 00:09:49.544 } 00:09:49.544 ]' 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:49.544 
18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:49.544 18:08:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:49.544 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:49.544 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:49.544 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:49.544 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:49.544 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:49.544 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:49.544 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:49.544 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:49.544 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:49.544 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:49.544 4096+0 records in 00:09:49.544 4096+0 records out 00:09:49.544 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0312644 s, 67.1 MB/s 00:09:49.544 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:50.117 4096+0 records in 00:09:50.117 4096+0 
records out 00:09:50.117 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.337608 s, 6.2 MB/s 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:50.117 128+0 records in 00:09:50.117 128+0 records out 00:09:50.117 65536 bytes (66 kB, 64 KiB) copied, 0.00126295 s, 51.9 MB/s 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:09:50.117 2035+0 records in 00:09:50.117 2035+0 records out 00:09:50.117 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00954029 s, 109 MB/s 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:50.117 456+0 records in 00:09:50.117 456+0 records out 00:09:50.117 233472 bytes (233 kB, 228 KiB) copied, 0.00299768 s, 77.9 MB/s 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.117 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:50.375 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:50.375 [2024-12-06 18:08:15.820237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.375 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:50.375 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:50.375 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.375 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.375 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:50.375 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:50.375 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.375 18:08:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:50.375 18:08:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:50.375 18:08:15 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:50.633 18:08:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:50.633 18:08:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:50.633 18:08:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:50.894 18:08:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:50.894 18:08:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:50.894 18:08:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:50.894 18:08:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:09:50.894 18:08:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:09:50.894 18:08:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:50.894 18:08:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:09:50.894 18:08:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:50.894 18:08:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60442 00:09:50.894 18:08:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60442 ']' 00:09:50.894 18:08:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60442 00:09:50.894 18:08:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:09:50.894 18:08:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.894 18:08:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60442 00:09:50.894 killing process with pid 60442 00:09:50.894 18:08:16 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.895 18:08:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.895 18:08:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60442' 00:09:50.895 18:08:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60442 00:09:50.895 [2024-12-06 18:08:16.195006] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.895 18:08:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60442 00:09:50.895 [2024-12-06 18:08:16.195137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.895 [2024-12-06 18:08:16.195211] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.895 [2024-12-06 18:08:16.195230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:50.895 [2024-12-06 18:08:16.390888] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.285 18:08:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:09:52.285 00:09:52.285 real 0m4.493s 00:09:52.285 user 0m5.550s 00:09:52.285 sys 0m1.040s 00:09:52.285 18:08:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.285 18:08:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:52.285 ************************************ 00:09:52.285 END TEST raid_function_test_concat 00:09:52.285 ************************************ 00:09:52.285 18:08:17 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:09:52.285 18:08:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:52.285 18:08:17 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.285 18:08:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.285 ************************************ 00:09:52.285 START TEST raid0_resize_test 00:09:52.285 ************************************ 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:52.285 Process raid pid: 60577 00:09:52.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60577 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60577' 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60577 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60577 ']' 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.285 18:08:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.285 [2024-12-06 18:08:17.642262] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:09:52.285 [2024-12-06 18:08:17.642690] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.544 [2024-12-06 18:08:17.830450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.544 [2024-12-06 18:08:17.965659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.803 [2024-12-06 18:08:18.176520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.803 [2024-12-06 18:08:18.176715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.371 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.371 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:09:53.371 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:53.371 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.371 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.371 Base_1 00:09:53.371 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.371 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:53.371 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.371 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.371 Base_2 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.372 [2024-12-06 18:08:18.714462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:53.372 [2024-12-06 18:08:18.717056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:53.372 [2024-12-06 18:08:18.717318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:53.372 [2024-12-06 18:08:18.717352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:53.372 [2024-12-06 18:08:18.717720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:53.372 [2024-12-06 18:08:18.717912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:53.372 [2024-12-06 18:08:18.717930] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:53.372 [2024-12-06 18:08:18.718144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.372 [2024-12-06 18:08:18.722443] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:53.372 [2024-12-06 18:08:18.722482] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:53.372 true 
00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:53.372 [2024-12-06 18:08:18.734672] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.372 [2024-12-06 18:08:18.790423] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:53.372 [2024-12-06 18:08:18.790459] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:53.372 [2024-12-06 18:08:18.790506] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:53.372 true 
00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.372 [2024-12-06 18:08:18.802677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60577 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60577 ']' 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60577 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60577 00:09:53.372 killing process with pid 60577 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60577' 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60577 00:09:53.372 [2024-12-06 18:08:18.874418] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.372 18:08:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60577 00:09:53.372 [2024-12-06 18:08:18.874526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.372 [2024-12-06 18:08:18.874595] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.372 [2024-12-06 18:08:18.874611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:53.631 [2024-12-06 18:08:18.890672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.569 18:08:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:54.569 00:09:54.569 real 0m2.420s 00:09:54.569 user 0m2.681s 00:09:54.569 sys 0m0.434s 00:09:54.569 18:08:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.569 18:08:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.569 ************************************ 00:09:54.569 END TEST raid0_resize_test 00:09:54.569 ************************************ 00:09:54.569 18:08:19 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:09:54.569 18:08:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.569 18:08:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.569 18:08:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.569 ************************************ 
00:09:54.569 START TEST raid1_resize_test 00:09:54.569 ************************************ 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:54.569 Process raid pid: 60633 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60633 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60633' 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60633 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60633 ']' 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:09:54.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.569 18:08:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.831 [2024-12-06 18:08:20.117053] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:09:54.831 [2024-12-06 18:08:20.117501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.831 [2024-12-06 18:08:20.307407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.108 [2024-12-06 18:08:20.443089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.367 [2024-12-06 18:08:20.655023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.367 [2024-12-06 18:08:20.655081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.933 Base_1 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.933 Base_2 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.933 [2024-12-06 18:08:21.194331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:55.933 [2024-12-06 18:08:21.196909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:55.933 [2024-12-06 18:08:21.196996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:55.933 [2024-12-06 18:08:21.197015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:55.933 [2024-12-06 18:08:21.197345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:55.933 [2024-12-06 18:08:21.197538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:55.933 [2024-12-06 18:08:21.197554] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:55.933 [2024-12-06 18:08:21.197754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.933 [2024-12-06 18:08:21.202318] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:55.933 [2024-12-06 18:08:21.202488] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:55.933 true 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.933 [2024-12-06 18:08:21.214573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.933 [2024-12-06 
18:08:21.262359] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:55.933 [2024-12-06 18:08:21.262400] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:55.933 [2024-12-06 18:08:21.262451] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:09:55.933 true 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.933 [2024-12-06 18:08:21.274555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:09:55.933 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:09:55.934 18:08:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60633 00:09:55.934 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60633 ']' 00:09:55.934 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60633 00:09:55.934 18:08:21 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:09:55.934 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.934 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60633 00:09:55.934 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.934 killing process with pid 60633 00:09:55.934 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.934 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60633' 00:09:55.934 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60633 00:09:55.934 18:08:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60633 00:09:55.934 [2024-12-06 18:08:21.361730] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.934 [2024-12-06 18:08:21.361849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.934 [2024-12-06 18:08:21.362479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.934 [2024-12-06 18:08:21.362510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:55.934 [2024-12-06 18:08:21.377834] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.371 ************************************ 00:09:57.371 END TEST raid1_resize_test 00:09:57.371 ************************************ 00:09:57.371 18:08:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:57.371 00:09:57.371 real 0m2.454s 00:09:57.371 user 0m2.785s 00:09:57.371 sys 0m0.386s 00:09:57.371 18:08:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.371 18:08:22 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.371 18:08:22 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:57.371 18:08:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:57.371 18:08:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:57.371 18:08:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:57.371 18:08:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.371 18:08:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.371 ************************************ 00:09:57.371 START TEST raid_state_function_test 00:09:57.371 ************************************ 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:57.371 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:57.372 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:57.372 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:57.372 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:57.372 Process raid pid: 60701 00:09:57.372 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60701 00:09:57.372 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60701' 00:09:57.372 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60701 00:09:57.372 18:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:57.372 18:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- 
# '[' -z 60701 ']' 00:09:57.372 18:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.372 18:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.372 18:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.372 18:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.372 18:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.372 [2024-12-06 18:08:22.634359] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:09:57.372 [2024-12-06 18:08:22.634548] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.372 [2024-12-06 18:08:22.838484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.631 [2024-12-06 18:08:23.003201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.889 [2024-12-06 18:08:23.260049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.889 [2024-12-06 18:08:23.260097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.456 [2024-12-06 18:08:23.698506] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.456 [2024-12-06 18:08:23.698742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.456 [2024-12-06 18:08:23.698945] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.456 [2024-12-06 18:08:23.698984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.456 "name": "Existed_Raid", 00:09:58.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.456 "strip_size_kb": 64, 00:09:58.456 "state": "configuring", 00:09:58.456 "raid_level": "raid0", 00:09:58.456 "superblock": false, 00:09:58.456 "num_base_bdevs": 2, 00:09:58.456 "num_base_bdevs_discovered": 0, 00:09:58.456 "num_base_bdevs_operational": 2, 00:09:58.456 "base_bdevs_list": [ 00:09:58.456 { 00:09:58.456 "name": "BaseBdev1", 00:09:58.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.456 "is_configured": false, 00:09:58.456 "data_offset": 0, 00:09:58.456 "data_size": 0 00:09:58.456 }, 00:09:58.456 { 00:09:58.456 "name": "BaseBdev2", 00:09:58.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.456 "is_configured": false, 00:09:58.456 "data_offset": 0, 00:09:58.456 "data_size": 0 00:09:58.456 } 00:09:58.456 ] 00:09:58.456 }' 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.456 18:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.714 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.714 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:58.714 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.714 [2024-12-06 18:08:24.222605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.714 [2024-12-06 18:08:24.222653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:58.714 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.714 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:58.714 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.714 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.972 [2024-12-06 18:08:24.234626] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.972 [2024-12-06 18:08:24.234692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.972 [2024-12-06 18:08:24.234709] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.972 [2024-12-06 18:08:24.234729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.972 [2024-12-06 18:08:24.280738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:09:58.972 BaseBdev1 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.972 [ 00:09:58.972 { 00:09:58.972 "name": "BaseBdev1", 00:09:58.972 "aliases": [ 00:09:58.972 "bb27d138-5b23-4500-a0c1-f5b1f6df9298" 00:09:58.972 ], 00:09:58.972 "product_name": "Malloc disk", 00:09:58.972 "block_size": 512, 00:09:58.972 "num_blocks": 65536, 00:09:58.972 "uuid": "bb27d138-5b23-4500-a0c1-f5b1f6df9298", 00:09:58.972 "assigned_rate_limits": { 00:09:58.972 "rw_ios_per_sec": 0, 
00:09:58.972 "rw_mbytes_per_sec": 0, 00:09:58.972 "r_mbytes_per_sec": 0, 00:09:58.972 "w_mbytes_per_sec": 0 00:09:58.972 }, 00:09:58.972 "claimed": true, 00:09:58.972 "claim_type": "exclusive_write", 00:09:58.972 "zoned": false, 00:09:58.972 "supported_io_types": { 00:09:58.972 "read": true, 00:09:58.972 "write": true, 00:09:58.972 "unmap": true, 00:09:58.972 "flush": true, 00:09:58.972 "reset": true, 00:09:58.972 "nvme_admin": false, 00:09:58.972 "nvme_io": false, 00:09:58.972 "nvme_io_md": false, 00:09:58.972 "write_zeroes": true, 00:09:58.972 "zcopy": true, 00:09:58.972 "get_zone_info": false, 00:09:58.972 "zone_management": false, 00:09:58.972 "zone_append": false, 00:09:58.972 "compare": false, 00:09:58.972 "compare_and_write": false, 00:09:58.972 "abort": true, 00:09:58.972 "seek_hole": false, 00:09:58.972 "seek_data": false, 00:09:58.972 "copy": true, 00:09:58.972 "nvme_iov_md": false 00:09:58.972 }, 00:09:58.972 "memory_domains": [ 00:09:58.972 { 00:09:58.972 "dma_device_id": "system", 00:09:58.972 "dma_device_type": 1 00:09:58.972 }, 00:09:58.972 { 00:09:58.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.972 "dma_device_type": 2 00:09:58.972 } 00:09:58.972 ], 00:09:58.972 "driver_specific": {} 00:09:58.972 } 00:09:58.972 ] 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.972 18:08:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.972 "name": "Existed_Raid", 00:09:58.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.972 "strip_size_kb": 64, 00:09:58.972 "state": "configuring", 00:09:58.972 "raid_level": "raid0", 00:09:58.972 "superblock": false, 00:09:58.972 "num_base_bdevs": 2, 00:09:58.972 "num_base_bdevs_discovered": 1, 00:09:58.972 "num_base_bdevs_operational": 2, 00:09:58.972 "base_bdevs_list": [ 00:09:58.972 { 00:09:58.972 "name": "BaseBdev1", 00:09:58.972 "uuid": "bb27d138-5b23-4500-a0c1-f5b1f6df9298", 00:09:58.972 "is_configured": true, 00:09:58.972 "data_offset": 0, 00:09:58.972 "data_size": 65536 00:09:58.972 }, 00:09:58.972 { 00:09:58.972 "name": "BaseBdev2", 00:09:58.972 
"uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.972 "is_configured": false, 00:09:58.972 "data_offset": 0, 00:09:58.972 "data_size": 0 00:09:58.972 } 00:09:58.972 ] 00:09:58.972 }' 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.972 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.601 [2024-12-06 18:08:24.848987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.601 [2024-12-06 18:08:24.849051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.601 [2024-12-06 18:08:24.857031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.601 [2024-12-06 18:08:24.859708] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.601 [2024-12-06 18:08:24.859817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.601 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.602 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.602 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.602 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.602 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.602 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.602 18:08:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.602 "name": "Existed_Raid", 00:09:59.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.602 "strip_size_kb": 64, 00:09:59.602 "state": "configuring", 00:09:59.602 "raid_level": "raid0", 00:09:59.602 "superblock": false, 00:09:59.602 "num_base_bdevs": 2, 00:09:59.602 "num_base_bdevs_discovered": 1, 00:09:59.602 "num_base_bdevs_operational": 2, 00:09:59.602 "base_bdevs_list": [ 00:09:59.602 { 00:09:59.602 "name": "BaseBdev1", 00:09:59.602 "uuid": "bb27d138-5b23-4500-a0c1-f5b1f6df9298", 00:09:59.602 "is_configured": true, 00:09:59.602 "data_offset": 0, 00:09:59.602 "data_size": 65536 00:09:59.602 }, 00:09:59.602 { 00:09:59.602 "name": "BaseBdev2", 00:09:59.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.602 "is_configured": false, 00:09:59.602 "data_offset": 0, 00:09:59.602 "data_size": 0 00:09:59.602 } 00:09:59.602 ] 00:09:59.602 }' 00:09:59.602 18:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.602 18:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.180 [2024-12-06 18:08:25.442427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.180 [2024-12-06 18:08:25.442498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:00.180 [2024-12-06 18:08:25.442513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:00.180 [2024-12-06 18:08:25.442972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:00.180 [2024-12-06 
18:08:25.443215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:00.180 [2024-12-06 18:08:25.443238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:00.180 BaseBdev2 00:10:00.180 [2024-12-06 18:08:25.443563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.180 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.180 [ 00:10:00.180 { 
00:10:00.180 "name": "BaseBdev2", 00:10:00.180 "aliases": [ 00:10:00.180 "bc1747b6-9cf7-401f-818b-4643ec36ce40" 00:10:00.180 ], 00:10:00.180 "product_name": "Malloc disk", 00:10:00.180 "block_size": 512, 00:10:00.180 "num_blocks": 65536, 00:10:00.180 "uuid": "bc1747b6-9cf7-401f-818b-4643ec36ce40", 00:10:00.181 "assigned_rate_limits": { 00:10:00.181 "rw_ios_per_sec": 0, 00:10:00.181 "rw_mbytes_per_sec": 0, 00:10:00.181 "r_mbytes_per_sec": 0, 00:10:00.181 "w_mbytes_per_sec": 0 00:10:00.181 }, 00:10:00.181 "claimed": true, 00:10:00.181 "claim_type": "exclusive_write", 00:10:00.181 "zoned": false, 00:10:00.181 "supported_io_types": { 00:10:00.181 "read": true, 00:10:00.181 "write": true, 00:10:00.181 "unmap": true, 00:10:00.181 "flush": true, 00:10:00.181 "reset": true, 00:10:00.181 "nvme_admin": false, 00:10:00.181 "nvme_io": false, 00:10:00.181 "nvme_io_md": false, 00:10:00.181 "write_zeroes": true, 00:10:00.181 "zcopy": true, 00:10:00.181 "get_zone_info": false, 00:10:00.181 "zone_management": false, 00:10:00.181 "zone_append": false, 00:10:00.181 "compare": false, 00:10:00.181 "compare_and_write": false, 00:10:00.181 "abort": true, 00:10:00.181 "seek_hole": false, 00:10:00.181 "seek_data": false, 00:10:00.181 "copy": true, 00:10:00.181 "nvme_iov_md": false 00:10:00.181 }, 00:10:00.181 "memory_domains": [ 00:10:00.181 { 00:10:00.181 "dma_device_id": "system", 00:10:00.181 "dma_device_type": 1 00:10:00.181 }, 00:10:00.181 { 00:10:00.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.181 "dma_device_type": 2 00:10:00.181 } 00:10:00.181 ], 00:10:00.181 "driver_specific": {} 00:10:00.181 } 00:10:00.181 ] 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.181 "name": "Existed_Raid", 00:10:00.181 "uuid": "1bf05116-f889-48f5-9a46-03cc8e45867d", 00:10:00.181 
"strip_size_kb": 64, 00:10:00.181 "state": "online", 00:10:00.181 "raid_level": "raid0", 00:10:00.181 "superblock": false, 00:10:00.181 "num_base_bdevs": 2, 00:10:00.181 "num_base_bdevs_discovered": 2, 00:10:00.181 "num_base_bdevs_operational": 2, 00:10:00.181 "base_bdevs_list": [ 00:10:00.181 { 00:10:00.181 "name": "BaseBdev1", 00:10:00.181 "uuid": "bb27d138-5b23-4500-a0c1-f5b1f6df9298", 00:10:00.181 "is_configured": true, 00:10:00.181 "data_offset": 0, 00:10:00.181 "data_size": 65536 00:10:00.181 }, 00:10:00.181 { 00:10:00.181 "name": "BaseBdev2", 00:10:00.181 "uuid": "bc1747b6-9cf7-401f-818b-4643ec36ce40", 00:10:00.181 "is_configured": true, 00:10:00.181 "data_offset": 0, 00:10:00.181 "data_size": 65536 00:10:00.181 } 00:10:00.181 ] 00:10:00.181 }' 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.181 18:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.745 
18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.745 [2024-12-06 18:08:26.011032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:00.745 "name": "Existed_Raid", 00:10:00.745 "aliases": [ 00:10:00.745 "1bf05116-f889-48f5-9a46-03cc8e45867d" 00:10:00.745 ], 00:10:00.745 "product_name": "Raid Volume", 00:10:00.745 "block_size": 512, 00:10:00.745 "num_blocks": 131072, 00:10:00.745 "uuid": "1bf05116-f889-48f5-9a46-03cc8e45867d", 00:10:00.745 "assigned_rate_limits": { 00:10:00.745 "rw_ios_per_sec": 0, 00:10:00.745 "rw_mbytes_per_sec": 0, 00:10:00.745 "r_mbytes_per_sec": 0, 00:10:00.745 "w_mbytes_per_sec": 0 00:10:00.745 }, 00:10:00.745 "claimed": false, 00:10:00.745 "zoned": false, 00:10:00.745 "supported_io_types": { 00:10:00.745 "read": true, 00:10:00.745 "write": true, 00:10:00.745 "unmap": true, 00:10:00.745 "flush": true, 00:10:00.745 "reset": true, 00:10:00.745 "nvme_admin": false, 00:10:00.745 "nvme_io": false, 00:10:00.745 "nvme_io_md": false, 00:10:00.745 "write_zeroes": true, 00:10:00.745 "zcopy": false, 00:10:00.745 "get_zone_info": false, 00:10:00.745 "zone_management": false, 00:10:00.745 "zone_append": false, 00:10:00.745 "compare": false, 00:10:00.745 "compare_and_write": false, 00:10:00.745 "abort": false, 00:10:00.745 "seek_hole": false, 00:10:00.745 "seek_data": false, 00:10:00.745 "copy": false, 00:10:00.745 "nvme_iov_md": false 00:10:00.745 }, 00:10:00.745 "memory_domains": [ 00:10:00.745 { 00:10:00.745 "dma_device_id": "system", 00:10:00.745 "dma_device_type": 1 00:10:00.745 }, 00:10:00.745 { 00:10:00.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.745 "dma_device_type": 2 00:10:00.745 }, 00:10:00.745 { 00:10:00.745 "dma_device_id": "system", 
00:10:00.745 "dma_device_type": 1 00:10:00.745 }, 00:10:00.745 { 00:10:00.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.745 "dma_device_type": 2 00:10:00.745 } 00:10:00.745 ], 00:10:00.745 "driver_specific": { 00:10:00.745 "raid": { 00:10:00.745 "uuid": "1bf05116-f889-48f5-9a46-03cc8e45867d", 00:10:00.745 "strip_size_kb": 64, 00:10:00.745 "state": "online", 00:10:00.745 "raid_level": "raid0", 00:10:00.745 "superblock": false, 00:10:00.745 "num_base_bdevs": 2, 00:10:00.745 "num_base_bdevs_discovered": 2, 00:10:00.745 "num_base_bdevs_operational": 2, 00:10:00.745 "base_bdevs_list": [ 00:10:00.745 { 00:10:00.745 "name": "BaseBdev1", 00:10:00.745 "uuid": "bb27d138-5b23-4500-a0c1-f5b1f6df9298", 00:10:00.745 "is_configured": true, 00:10:00.745 "data_offset": 0, 00:10:00.745 "data_size": 65536 00:10:00.745 }, 00:10:00.745 { 00:10:00.745 "name": "BaseBdev2", 00:10:00.745 "uuid": "bc1747b6-9cf7-401f-818b-4643ec36ce40", 00:10:00.745 "is_configured": true, 00:10:00.745 "data_offset": 0, 00:10:00.745 "data_size": 65536 00:10:00.745 } 00:10:00.745 ] 00:10:00.745 } 00:10:00.745 } 00:10:00.745 }' 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:00.745 BaseBdev2' 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:00.745 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.746 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.746 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.746 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.003 [2024-12-06 18:08:26.270873] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.003 [2024-12-06 18:08:26.270925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.003 [2024-12-06 18:08:26.271002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.003 "name": "Existed_Raid", 00:10:01.003 "uuid": "1bf05116-f889-48f5-9a46-03cc8e45867d", 00:10:01.003 "strip_size_kb": 64, 00:10:01.003 "state": "offline", 00:10:01.003 "raid_level": "raid0", 00:10:01.003 "superblock": false, 00:10:01.003 "num_base_bdevs": 2, 00:10:01.003 "num_base_bdevs_discovered": 1, 00:10:01.003 "num_base_bdevs_operational": 1, 00:10:01.003 "base_bdevs_list": [ 00:10:01.003 { 00:10:01.003 "name": null, 00:10:01.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.003 "is_configured": false, 00:10:01.003 "data_offset": 0, 00:10:01.003 "data_size": 65536 00:10:01.003 }, 00:10:01.003 { 00:10:01.003 "name": "BaseBdev2", 00:10:01.003 "uuid": "bc1747b6-9cf7-401f-818b-4643ec36ce40", 00:10:01.003 "is_configured": true, 00:10:01.003 "data_offset": 0, 00:10:01.003 "data_size": 65536 00:10:01.003 } 00:10:01.003 ] 00:10:01.003 }' 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.003 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.569 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:01.569 18:08:26 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.569 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.569 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.569 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.569 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.569 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.569 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.569 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.569 18:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:01.569 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.569 18:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.569 [2024-12-06 18:08:26.946664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.569 [2024-12-06 18:08:26.946748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:01.569 18:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.569 18:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.569 18:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.569 18:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.569 18:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.569 
18:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.569 18:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:01.569 18:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.840 18:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:01.840 18:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:01.840 18:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:01.840 18:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60701 00:10:01.840 18:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60701 ']' 00:10:01.840 18:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60701 00:10:01.840 18:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:01.840 18:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.840 18:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60701 00:10:01.840 killing process with pid 60701 00:10:01.840 18:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.840 18:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.840 18:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60701' 00:10:01.840 18:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60701 00:10:01.840 [2024-12-06 18:08:27.141467] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:01.840 18:08:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@978 -- # wait 60701 00:10:01.840 [2024-12-06 18:08:27.157417] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.216 18:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:03.216 00:10:03.216 real 0m5.871s 00:10:03.216 user 0m8.734s 00:10:03.216 sys 0m0.855s 00:10:03.216 18:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.216 ************************************ 00:10:03.216 END TEST raid_state_function_test 00:10:03.216 ************************************ 00:10:03.216 18:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.216 18:08:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:10:03.216 18:08:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:03.216 18:08:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.216 18:08:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.216 ************************************ 00:10:03.216 START TEST raid_state_function_test_sb 00:10:03.216 ************************************ 00:10:03.216 18:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:10:03.216 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:03.216 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:03.216 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:03.216 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:03.216 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@229 -- # raid_pid=60960 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:03.217 Process raid pid: 60960 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60960' 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60960 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60960 ']' 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.217 18:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.217 [2024-12-06 18:08:28.544561] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:10:03.217 [2024-12-06 18:08:28.544716] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.217 [2024-12-06 18:08:28.729609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.475 [2024-12-06 18:08:28.899603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.733 [2024-12-06 18:08:29.168429] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.733 [2024-12-06 18:08:29.168498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.300 [2024-12-06 18:08:29.725247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.300 [2024-12-06 18:08:29.725322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.300 [2024-12-06 18:08:29.725339] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.300 [2024-12-06 18:08:29.725355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.300 
18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.300 "name": "Existed_Raid", 00:10:04.300 "uuid": "07de141b-5f94-4add-a8e6-7561e3c94d40", 00:10:04.300 "strip_size_kb": 
64, 00:10:04.300 "state": "configuring", 00:10:04.300 "raid_level": "raid0", 00:10:04.300 "superblock": true, 00:10:04.300 "num_base_bdevs": 2, 00:10:04.300 "num_base_bdevs_discovered": 0, 00:10:04.300 "num_base_bdevs_operational": 2, 00:10:04.300 "base_bdevs_list": [ 00:10:04.300 { 00:10:04.300 "name": "BaseBdev1", 00:10:04.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.300 "is_configured": false, 00:10:04.300 "data_offset": 0, 00:10:04.300 "data_size": 0 00:10:04.300 }, 00:10:04.300 { 00:10:04.300 "name": "BaseBdev2", 00:10:04.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.300 "is_configured": false, 00:10:04.300 "data_offset": 0, 00:10:04.300 "data_size": 0 00:10:04.300 } 00:10:04.300 ] 00:10:04.300 }' 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.300 18:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.868 [2024-12-06 18:08:30.289370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.868 [2024-12-06 18:08:30.289428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.868 18:08:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.868 [2024-12-06 18:08:30.301318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.868 [2024-12-06 18:08:30.301387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.868 [2024-12-06 18:08:30.301407] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.868 [2024-12-06 18:08:30.301431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.868 [2024-12-06 18:08:30.348116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.868 BaseBdev1 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.868 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.868 [ 00:10:04.868 { 00:10:04.868 "name": "BaseBdev1", 00:10:04.868 "aliases": [ 00:10:04.868 "d98b5235-8dd6-47d7-a371-673547eaa576" 00:10:04.868 ], 00:10:04.868 "product_name": "Malloc disk", 00:10:04.868 "block_size": 512, 00:10:04.868 "num_blocks": 65536, 00:10:04.868 "uuid": "d98b5235-8dd6-47d7-a371-673547eaa576", 00:10:04.868 "assigned_rate_limits": { 00:10:04.868 "rw_ios_per_sec": 0, 00:10:04.868 "rw_mbytes_per_sec": 0, 00:10:04.868 "r_mbytes_per_sec": 0, 00:10:04.868 "w_mbytes_per_sec": 0 00:10:04.868 }, 00:10:04.868 "claimed": true, 00:10:04.868 "claim_type": "exclusive_write", 00:10:04.868 "zoned": false, 00:10:04.868 "supported_io_types": { 00:10:04.868 "read": true, 00:10:04.868 "write": true, 00:10:04.868 "unmap": true, 00:10:04.868 "flush": true, 00:10:04.868 "reset": true, 00:10:04.868 "nvme_admin": false, 00:10:04.868 "nvme_io": false, 00:10:04.868 "nvme_io_md": false, 00:10:04.868 "write_zeroes": true, 00:10:04.868 "zcopy": true, 00:10:04.868 "get_zone_info": false, 00:10:04.868 "zone_management": false, 00:10:05.127 "zone_append": false, 00:10:05.127 "compare": false, 00:10:05.127 "compare_and_write": false, 00:10:05.127 
"abort": true, 00:10:05.127 "seek_hole": false, 00:10:05.127 "seek_data": false, 00:10:05.127 "copy": true, 00:10:05.127 "nvme_iov_md": false 00:10:05.127 }, 00:10:05.127 "memory_domains": [ 00:10:05.127 { 00:10:05.127 "dma_device_id": "system", 00:10:05.127 "dma_device_type": 1 00:10:05.127 }, 00:10:05.127 { 00:10:05.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.127 "dma_device_type": 2 00:10:05.127 } 00:10:05.127 ], 00:10:05.127 "driver_specific": {} 00:10:05.127 } 00:10:05.127 ] 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.127 "name": "Existed_Raid", 00:10:05.127 "uuid": "3572acee-5e44-4c83-a55d-78443ad15d2e", 00:10:05.127 "strip_size_kb": 64, 00:10:05.127 "state": "configuring", 00:10:05.127 "raid_level": "raid0", 00:10:05.127 "superblock": true, 00:10:05.127 "num_base_bdevs": 2, 00:10:05.127 "num_base_bdevs_discovered": 1, 00:10:05.127 "num_base_bdevs_operational": 2, 00:10:05.127 "base_bdevs_list": [ 00:10:05.127 { 00:10:05.127 "name": "BaseBdev1", 00:10:05.127 "uuid": "d98b5235-8dd6-47d7-a371-673547eaa576", 00:10:05.127 "is_configured": true, 00:10:05.127 "data_offset": 2048, 00:10:05.127 "data_size": 63488 00:10:05.127 }, 00:10:05.127 { 00:10:05.127 "name": "BaseBdev2", 00:10:05.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.127 "is_configured": false, 00:10:05.127 "data_offset": 0, 00:10:05.127 "data_size": 0 00:10:05.127 } 00:10:05.127 ] 00:10:05.127 }' 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.127 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.695 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.695 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.695 18:08:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:05.695 [2024-12-06 18:08:30.912345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.695 [2024-12-06 18:08:30.912417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:05.695 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.695 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:05.695 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.696 [2024-12-06 18:08:30.920388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.696 [2024-12-06 18:08:30.922895] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.696 [2024-12-06 18:08:30.922949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.696 "name": "Existed_Raid", 00:10:05.696 "uuid": "38f4ebb4-b454-4897-9a02-1209d1191378", 00:10:05.696 "strip_size_kb": 64, 00:10:05.696 "state": "configuring", 00:10:05.696 "raid_level": "raid0", 00:10:05.696 "superblock": true, 00:10:05.696 "num_base_bdevs": 2, 00:10:05.696 "num_base_bdevs_discovered": 1, 00:10:05.696 "num_base_bdevs_operational": 2, 00:10:05.696 "base_bdevs_list": [ 00:10:05.696 { 00:10:05.696 "name": "BaseBdev1", 00:10:05.696 "uuid": "d98b5235-8dd6-47d7-a371-673547eaa576", 00:10:05.696 "is_configured": true, 00:10:05.696 "data_offset": 2048, 
00:10:05.696 "data_size": 63488 00:10:05.696 }, 00:10:05.696 { 00:10:05.696 "name": "BaseBdev2", 00:10:05.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.696 "is_configured": false, 00:10:05.696 "data_offset": 0, 00:10:05.696 "data_size": 0 00:10:05.696 } 00:10:05.696 ] 00:10:05.696 }' 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.696 18:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.955 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:05.955 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.955 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.955 [2024-12-06 18:08:31.472263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.955 [2024-12-06 18:08:31.472555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:05.955 [2024-12-06 18:08:31.472586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:05.955 [2024-12-06 18:08:31.472920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:05.955 [2024-12-06 18:08:31.473122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:05.955 [2024-12-06 18:08:31.473155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:05.955 [2024-12-06 18:08:31.473419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.215 BaseBdev2 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.215 [ 00:10:06.215 { 00:10:06.215 "name": "BaseBdev2", 00:10:06.215 "aliases": [ 00:10:06.215 "64385054-a404-4f86-b78a-89ed96fc37ef" 00:10:06.215 ], 00:10:06.215 "product_name": "Malloc disk", 00:10:06.215 "block_size": 512, 00:10:06.215 "num_blocks": 65536, 00:10:06.215 "uuid": "64385054-a404-4f86-b78a-89ed96fc37ef", 00:10:06.215 "assigned_rate_limits": { 00:10:06.215 "rw_ios_per_sec": 0, 00:10:06.215 "rw_mbytes_per_sec": 0, 00:10:06.215 "r_mbytes_per_sec": 0, 00:10:06.215 "w_mbytes_per_sec": 0 00:10:06.215 }, 00:10:06.215 "claimed": true, 00:10:06.215 "claim_type": 
"exclusive_write", 00:10:06.215 "zoned": false, 00:10:06.215 "supported_io_types": { 00:10:06.215 "read": true, 00:10:06.215 "write": true, 00:10:06.215 "unmap": true, 00:10:06.215 "flush": true, 00:10:06.215 "reset": true, 00:10:06.215 "nvme_admin": false, 00:10:06.215 "nvme_io": false, 00:10:06.215 "nvme_io_md": false, 00:10:06.215 "write_zeroes": true, 00:10:06.215 "zcopy": true, 00:10:06.215 "get_zone_info": false, 00:10:06.215 "zone_management": false, 00:10:06.215 "zone_append": false, 00:10:06.215 "compare": false, 00:10:06.215 "compare_and_write": false, 00:10:06.215 "abort": true, 00:10:06.215 "seek_hole": false, 00:10:06.215 "seek_data": false, 00:10:06.215 "copy": true, 00:10:06.215 "nvme_iov_md": false 00:10:06.215 }, 00:10:06.215 "memory_domains": [ 00:10:06.215 { 00:10:06.215 "dma_device_id": "system", 00:10:06.215 "dma_device_type": 1 00:10:06.215 }, 00:10:06.215 { 00:10:06.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.215 "dma_device_type": 2 00:10:06.215 } 00:10:06.215 ], 00:10:06.215 "driver_specific": {} 00:10:06.215 } 00:10:06.215 ] 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.215 "name": "Existed_Raid", 00:10:06.215 "uuid": "38f4ebb4-b454-4897-9a02-1209d1191378", 00:10:06.215 "strip_size_kb": 64, 00:10:06.215 "state": "online", 00:10:06.215 "raid_level": "raid0", 00:10:06.215 "superblock": true, 00:10:06.215 "num_base_bdevs": 2, 00:10:06.215 "num_base_bdevs_discovered": 2, 00:10:06.215 "num_base_bdevs_operational": 2, 00:10:06.215 "base_bdevs_list": [ 00:10:06.215 { 00:10:06.215 "name": "BaseBdev1", 00:10:06.215 "uuid": "d98b5235-8dd6-47d7-a371-673547eaa576", 00:10:06.215 "is_configured": true, 00:10:06.215 "data_offset": 2048, 00:10:06.215 "data_size": 63488 
00:10:06.215 }, 00:10:06.215 { 00:10:06.215 "name": "BaseBdev2", 00:10:06.215 "uuid": "64385054-a404-4f86-b78a-89ed96fc37ef", 00:10:06.215 "is_configured": true, 00:10:06.215 "data_offset": 2048, 00:10:06.215 "data_size": 63488 00:10:06.215 } 00:10:06.215 ] 00:10:06.215 }' 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.215 18:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.784 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.784 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.784 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.784 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.784 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.784 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.784 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.784 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.784 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.784 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.784 [2024-12-06 18:08:32.036824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.784 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.784 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.784 "name": 
"Existed_Raid", 00:10:06.784 "aliases": [ 00:10:06.784 "38f4ebb4-b454-4897-9a02-1209d1191378" 00:10:06.784 ], 00:10:06.784 "product_name": "Raid Volume", 00:10:06.784 "block_size": 512, 00:10:06.784 "num_blocks": 126976, 00:10:06.784 "uuid": "38f4ebb4-b454-4897-9a02-1209d1191378", 00:10:06.784 "assigned_rate_limits": { 00:10:06.784 "rw_ios_per_sec": 0, 00:10:06.784 "rw_mbytes_per_sec": 0, 00:10:06.784 "r_mbytes_per_sec": 0, 00:10:06.784 "w_mbytes_per_sec": 0 00:10:06.784 }, 00:10:06.784 "claimed": false, 00:10:06.784 "zoned": false, 00:10:06.784 "supported_io_types": { 00:10:06.784 "read": true, 00:10:06.784 "write": true, 00:10:06.784 "unmap": true, 00:10:06.784 "flush": true, 00:10:06.784 "reset": true, 00:10:06.784 "nvme_admin": false, 00:10:06.784 "nvme_io": false, 00:10:06.784 "nvme_io_md": false, 00:10:06.784 "write_zeroes": true, 00:10:06.784 "zcopy": false, 00:10:06.784 "get_zone_info": false, 00:10:06.784 "zone_management": false, 00:10:06.784 "zone_append": false, 00:10:06.784 "compare": false, 00:10:06.784 "compare_and_write": false, 00:10:06.784 "abort": false, 00:10:06.784 "seek_hole": false, 00:10:06.784 "seek_data": false, 00:10:06.784 "copy": false, 00:10:06.784 "nvme_iov_md": false 00:10:06.784 }, 00:10:06.784 "memory_domains": [ 00:10:06.784 { 00:10:06.784 "dma_device_id": "system", 00:10:06.784 "dma_device_type": 1 00:10:06.784 }, 00:10:06.784 { 00:10:06.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.784 "dma_device_type": 2 00:10:06.784 }, 00:10:06.784 { 00:10:06.784 "dma_device_id": "system", 00:10:06.784 "dma_device_type": 1 00:10:06.784 }, 00:10:06.784 { 00:10:06.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.784 "dma_device_type": 2 00:10:06.784 } 00:10:06.784 ], 00:10:06.784 "driver_specific": { 00:10:06.784 "raid": { 00:10:06.784 "uuid": "38f4ebb4-b454-4897-9a02-1209d1191378", 00:10:06.784 "strip_size_kb": 64, 00:10:06.784 "state": "online", 00:10:06.784 "raid_level": "raid0", 00:10:06.784 "superblock": true, 00:10:06.784 
"num_base_bdevs": 2, 00:10:06.784 "num_base_bdevs_discovered": 2, 00:10:06.784 "num_base_bdevs_operational": 2, 00:10:06.784 "base_bdevs_list": [ 00:10:06.784 { 00:10:06.784 "name": "BaseBdev1", 00:10:06.785 "uuid": "d98b5235-8dd6-47d7-a371-673547eaa576", 00:10:06.785 "is_configured": true, 00:10:06.785 "data_offset": 2048, 00:10:06.785 "data_size": 63488 00:10:06.785 }, 00:10:06.785 { 00:10:06.785 "name": "BaseBdev2", 00:10:06.785 "uuid": "64385054-a404-4f86-b78a-89ed96fc37ef", 00:10:06.785 "is_configured": true, 00:10:06.785 "data_offset": 2048, 00:10:06.785 "data_size": 63488 00:10:06.785 } 00:10:06.785 ] 00:10:06.785 } 00:10:06.785 } 00:10:06.785 }' 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:06.785 BaseBdev2' 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.785 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.044 [2024-12-06 18:08:32.328589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.044 [2024-12-06 18:08:32.328632] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.044 [2024-12-06 18:08:32.328694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.044 18:08:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.044 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.044 "name": "Existed_Raid", 00:10:07.044 "uuid": "38f4ebb4-b454-4897-9a02-1209d1191378", 00:10:07.045 "strip_size_kb": 64, 00:10:07.045 "state": "offline", 00:10:07.045 "raid_level": "raid0", 00:10:07.045 "superblock": true, 00:10:07.045 "num_base_bdevs": 2, 00:10:07.045 "num_base_bdevs_discovered": 1, 00:10:07.045 "num_base_bdevs_operational": 1, 00:10:07.045 "base_bdevs_list": [ 00:10:07.045 { 00:10:07.045 "name": null, 00:10:07.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.045 "is_configured": false, 00:10:07.045 "data_offset": 0, 00:10:07.045 "data_size": 63488 00:10:07.045 }, 00:10:07.045 { 00:10:07.045 "name": "BaseBdev2", 00:10:07.045 "uuid": "64385054-a404-4f86-b78a-89ed96fc37ef", 00:10:07.045 "is_configured": true, 00:10:07.045 "data_offset": 2048, 00:10:07.045 "data_size": 63488 00:10:07.045 } 00:10:07.045 ] 00:10:07.045 }' 00:10:07.045 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.045 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.611 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:07.611 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.611 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:07.611 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.611 18:08:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.611 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.611 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.612 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:07.612 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:07.612 18:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:07.612 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.612 18:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.612 [2024-12-06 18:08:32.998986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:07.612 [2024-12-06 18:08:32.999052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:07.612 18:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.612 18:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.612 18:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.612 18:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.612 18:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:07.612 18:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.612 18:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.612 18:08:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.870 18:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:07.870 18:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:07.871 18:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:07.871 18:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60960 00:10:07.871 18:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60960 ']' 00:10:07.871 18:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60960 00:10:07.871 18:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:07.871 18:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.871 18:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60960 00:10:07.871 killing process with pid 60960 00:10:07.871 18:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.871 18:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.871 18:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60960' 00:10:07.871 18:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60960 00:10:07.871 [2024-12-06 18:08:33.184351] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.871 18:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60960 00:10:07.871 [2024-12-06 18:08:33.199871] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.244 ************************************ 
00:10:09.244 END TEST raid_state_function_test_sb 00:10:09.244 ************************************ 00:10:09.244 18:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:09.244 00:10:09.244 real 0m5.950s 00:10:09.244 user 0m8.948s 00:10:09.244 sys 0m0.827s 00:10:09.244 18:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.244 18:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.244 18:08:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:09.244 18:08:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:09.244 18:08:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.244 18:08:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.244 ************************************ 00:10:09.244 START TEST raid_superblock_test 00:10:09.244 ************************************ 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:09.244 
18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:09.244 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:09.245 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:09.245 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61217 00:10:09.245 18:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61217 00:10:09.245 18:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61217 ']' 00:10:09.245 18:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.245 18:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.245 18:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:09.245 18:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.245 18:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.245 [2024-12-06 18:08:34.533044] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:10:09.245 [2024-12-06 18:08:34.534014] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61217 ] 00:10:09.245 [2024-12-06 18:08:34.715860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.503 [2024-12-06 18:08:34.847762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.761 [2024-12-06 18:08:35.054019] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.761 [2024-12-06 18:08:35.054115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.018 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.018 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:10.018 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:10.018 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.018 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.019 18:08:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.019 malloc1 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.019 [2024-12-06 18:08:35.502121] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:10.019 [2024-12-06 18:08:35.502190] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.019 [2024-12-06 18:08:35.502221] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:10.019 [2024-12-06 18:08:35.502237] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.019 [2024-12-06 18:08:35.505033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.019 [2024-12-06 18:08:35.505078] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:10.019 pt1 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.019 18:08:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.019 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.277 malloc2 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.277 [2024-12-06 18:08:35.558287] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:10.277 [2024-12-06 18:08:35.558516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.277 [2024-12-06 18:08:35.558565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:10.277 
[2024-12-06 18:08:35.558582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.277 [2024-12-06 18:08:35.561327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.277 [2024-12-06 18:08:35.561370] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:10.277 pt2 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.277 [2024-12-06 18:08:35.570374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:10.277 [2024-12-06 18:08:35.572803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:10.277 [2024-12-06 18:08:35.573025] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:10.277 [2024-12-06 18:08:35.573044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:10.277 [2024-12-06 18:08:35.573367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:10.277 [2024-12-06 18:08:35.573559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:10.277 [2024-12-06 18:08:35.573579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:10.277 [2024-12-06 18:08:35.573795] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.277 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.278 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.278 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:10.278 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.278 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.278 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.278 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.278 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.278 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.278 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.278 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.278 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.278 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.278 "name": "raid_bdev1", 00:10:10.278 "uuid": 
"96837f1d-9bf7-4a34-8d54-c7a8c4d7b0dd", 00:10:10.278 "strip_size_kb": 64, 00:10:10.278 "state": "online", 00:10:10.278 "raid_level": "raid0", 00:10:10.278 "superblock": true, 00:10:10.278 "num_base_bdevs": 2, 00:10:10.278 "num_base_bdevs_discovered": 2, 00:10:10.278 "num_base_bdevs_operational": 2, 00:10:10.278 "base_bdevs_list": [ 00:10:10.278 { 00:10:10.278 "name": "pt1", 00:10:10.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.278 "is_configured": true, 00:10:10.278 "data_offset": 2048, 00:10:10.278 "data_size": 63488 00:10:10.278 }, 00:10:10.278 { 00:10:10.278 "name": "pt2", 00:10:10.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.278 "is_configured": true, 00:10:10.278 "data_offset": 2048, 00:10:10.278 "data_size": 63488 00:10:10.278 } 00:10:10.278 ] 00:10:10.278 }' 00:10:10.278 18:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.278 18:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.536 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:10.536 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:10.536 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.536 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.536 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.536 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.536 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:10.536 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.536 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.536 18:08:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.536 [2024-12-06 18:08:36.038817] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.795 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.795 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.795 "name": "raid_bdev1", 00:10:10.795 "aliases": [ 00:10:10.795 "96837f1d-9bf7-4a34-8d54-c7a8c4d7b0dd" 00:10:10.795 ], 00:10:10.795 "product_name": "Raid Volume", 00:10:10.795 "block_size": 512, 00:10:10.795 "num_blocks": 126976, 00:10:10.795 "uuid": "96837f1d-9bf7-4a34-8d54-c7a8c4d7b0dd", 00:10:10.795 "assigned_rate_limits": { 00:10:10.795 "rw_ios_per_sec": 0, 00:10:10.795 "rw_mbytes_per_sec": 0, 00:10:10.795 "r_mbytes_per_sec": 0, 00:10:10.795 "w_mbytes_per_sec": 0 00:10:10.795 }, 00:10:10.795 "claimed": false, 00:10:10.795 "zoned": false, 00:10:10.795 "supported_io_types": { 00:10:10.795 "read": true, 00:10:10.795 "write": true, 00:10:10.795 "unmap": true, 00:10:10.795 "flush": true, 00:10:10.795 "reset": true, 00:10:10.795 "nvme_admin": false, 00:10:10.795 "nvme_io": false, 00:10:10.795 "nvme_io_md": false, 00:10:10.795 "write_zeroes": true, 00:10:10.795 "zcopy": false, 00:10:10.795 "get_zone_info": false, 00:10:10.795 "zone_management": false, 00:10:10.795 "zone_append": false, 00:10:10.795 "compare": false, 00:10:10.795 "compare_and_write": false, 00:10:10.796 "abort": false, 00:10:10.796 "seek_hole": false, 00:10:10.796 "seek_data": false, 00:10:10.796 "copy": false, 00:10:10.796 "nvme_iov_md": false 00:10:10.796 }, 00:10:10.796 "memory_domains": [ 00:10:10.796 { 00:10:10.796 "dma_device_id": "system", 00:10:10.796 "dma_device_type": 1 00:10:10.796 }, 00:10:10.796 { 00:10:10.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.796 "dma_device_type": 2 00:10:10.796 }, 00:10:10.796 { 00:10:10.796 "dma_device_id": "system", 00:10:10.796 "dma_device_type": 
1 00:10:10.796 }, 00:10:10.796 { 00:10:10.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.796 "dma_device_type": 2 00:10:10.796 } 00:10:10.796 ], 00:10:10.796 "driver_specific": { 00:10:10.796 "raid": { 00:10:10.796 "uuid": "96837f1d-9bf7-4a34-8d54-c7a8c4d7b0dd", 00:10:10.796 "strip_size_kb": 64, 00:10:10.796 "state": "online", 00:10:10.796 "raid_level": "raid0", 00:10:10.796 "superblock": true, 00:10:10.796 "num_base_bdevs": 2, 00:10:10.796 "num_base_bdevs_discovered": 2, 00:10:10.796 "num_base_bdevs_operational": 2, 00:10:10.796 "base_bdevs_list": [ 00:10:10.796 { 00:10:10.796 "name": "pt1", 00:10:10.796 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.796 "is_configured": true, 00:10:10.796 "data_offset": 2048, 00:10:10.796 "data_size": 63488 00:10:10.796 }, 00:10:10.796 { 00:10:10.796 "name": "pt2", 00:10:10.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.796 "is_configured": true, 00:10:10.796 "data_offset": 2048, 00:10:10.796 "data_size": 63488 00:10:10.796 } 00:10:10.796 ] 00:10:10.796 } 00:10:10.796 } 00:10:10.796 }' 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:10.796 pt2' 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.796 18:08:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:10.796 [2024-12-06 18:08:36.282869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:10.796 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=96837f1d-9bf7-4a34-8d54-c7a8c4d7b0dd 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 96837f1d-9bf7-4a34-8d54-c7a8c4d7b0dd ']' 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.054 [2024-12-06 18:08:36.330509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.054 [2024-12-06 18:08:36.330555] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.054 [2024-12-06 18:08:36.330716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.054 [2024-12-06 18:08:36.330931] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.054 [2024-12-06 18:08:36.331013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.054 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.055 [2024-12-06 18:08:36.454577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:11.055 [2024-12-06 18:08:36.457188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:11.055 [2024-12-06 18:08:36.457276] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:11.055 [2024-12-06 18:08:36.457352] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:11.055 [2024-12-06 18:08:36.457379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.055 [2024-12-06 18:08:36.457397] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:11.055 request: 00:10:11.055 { 00:10:11.055 "name": "raid_bdev1", 00:10:11.055 "raid_level": "raid0", 00:10:11.055 "base_bdevs": [ 00:10:11.055 "malloc1", 00:10:11.055 "malloc2" 00:10:11.055 ], 00:10:11.055 "strip_size_kb": 64, 00:10:11.055 "superblock": false, 00:10:11.055 "method": "bdev_raid_create", 00:10:11.055 "req_id": 1 00:10:11.055 } 00:10:11.055 Got JSON-RPC error response 00:10:11.055 response: 00:10:11.055 { 00:10:11.055 "code": -17, 00:10:11.055 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:11.055 } 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.055 [2024-12-06 18:08:36.526574] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:11.055 [2024-12-06 18:08:36.526798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.055 [2024-12-06 18:08:36.526870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:11.055 [2024-12-06 18:08:36.527064] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.055 [2024-12-06 18:08:36.530048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.055 [2024-12-06 18:08:36.530203] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:11.055 [2024-12-06 18:08:36.530421] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:11.055 [2024-12-06 18:08:36.530593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:11.055 pt1 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.055 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.315 18:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.315 "name": "raid_bdev1", 00:10:11.315 "uuid": "96837f1d-9bf7-4a34-8d54-c7a8c4d7b0dd", 00:10:11.315 "strip_size_kb": 64, 00:10:11.315 "state": "configuring", 00:10:11.315 "raid_level": "raid0", 00:10:11.315 "superblock": true, 00:10:11.315 "num_base_bdevs": 2, 00:10:11.315 "num_base_bdevs_discovered": 1, 00:10:11.315 "num_base_bdevs_operational": 2, 00:10:11.315 "base_bdevs_list": [ 00:10:11.315 { 00:10:11.315 "name": "pt1", 00:10:11.315 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.315 "is_configured": true, 00:10:11.315 "data_offset": 2048, 00:10:11.315 "data_size": 63488 00:10:11.315 }, 00:10:11.315 { 00:10:11.315 "name": null, 00:10:11.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.315 "is_configured": false, 00:10:11.315 "data_offset": 2048, 00:10:11.315 "data_size": 63488 00:10:11.315 } 00:10:11.315 ] 00:10:11.315 }' 00:10:11.315 18:08:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.315 18:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.573 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:11.573 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:11.573 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:11.573 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:11.573 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.573 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.573 [2024-12-06 18:08:37.063045] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:11.573 [2024-12-06 18:08:37.063284] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.573 [2024-12-06 18:08:37.063325] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:11.573 [2024-12-06 18:08:37.063345] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.573 [2024-12-06 18:08:37.063976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.573 [2024-12-06 18:08:37.064008] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:11.573 [2024-12-06 18:08:37.064111] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:11.573 [2024-12-06 18:08:37.064154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:11.573 [2024-12-06 18:08:37.064295] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:11.573 [2024-12-06 18:08:37.064316] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:11.573 [2024-12-06 18:08:37.064622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:11.573 [2024-12-06 18:08:37.064825] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:11.573 [2024-12-06 18:08:37.064840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:11.573 [2024-12-06 18:08:37.065017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.573 pt2 00:10:11.573 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.574 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.832 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.832 "name": "raid_bdev1", 00:10:11.832 "uuid": "96837f1d-9bf7-4a34-8d54-c7a8c4d7b0dd", 00:10:11.832 "strip_size_kb": 64, 00:10:11.832 "state": "online", 00:10:11.832 "raid_level": "raid0", 00:10:11.832 "superblock": true, 00:10:11.832 "num_base_bdevs": 2, 00:10:11.832 "num_base_bdevs_discovered": 2, 00:10:11.832 "num_base_bdevs_operational": 2, 00:10:11.832 "base_bdevs_list": [ 00:10:11.832 { 00:10:11.832 "name": "pt1", 00:10:11.832 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.832 "is_configured": true, 00:10:11.832 "data_offset": 2048, 00:10:11.832 "data_size": 63488 00:10:11.832 }, 00:10:11.832 { 00:10:11.832 "name": "pt2", 00:10:11.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.832 "is_configured": true, 00:10:11.832 "data_offset": 2048, 00:10:11.832 "data_size": 63488 00:10:11.832 } 00:10:11.832 ] 00:10:11.832 }' 00:10:11.832 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.832 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.091 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:12.091 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:12.091 
18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.091 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.091 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.091 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.091 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.091 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.091 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.091 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.091 [2024-12-06 18:08:37.595464] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:12.350 "name": "raid_bdev1", 00:10:12.350 "aliases": [ 00:10:12.350 "96837f1d-9bf7-4a34-8d54-c7a8c4d7b0dd" 00:10:12.350 ], 00:10:12.350 "product_name": "Raid Volume", 00:10:12.350 "block_size": 512, 00:10:12.350 "num_blocks": 126976, 00:10:12.350 "uuid": "96837f1d-9bf7-4a34-8d54-c7a8c4d7b0dd", 00:10:12.350 "assigned_rate_limits": { 00:10:12.350 "rw_ios_per_sec": 0, 00:10:12.350 "rw_mbytes_per_sec": 0, 00:10:12.350 "r_mbytes_per_sec": 0, 00:10:12.350 "w_mbytes_per_sec": 0 00:10:12.350 }, 00:10:12.350 "claimed": false, 00:10:12.350 "zoned": false, 00:10:12.350 "supported_io_types": { 00:10:12.350 "read": true, 00:10:12.350 "write": true, 00:10:12.350 "unmap": true, 00:10:12.350 "flush": true, 00:10:12.350 "reset": true, 00:10:12.350 "nvme_admin": false, 00:10:12.350 "nvme_io": false, 00:10:12.350 "nvme_io_md": false, 00:10:12.350 
"write_zeroes": true, 00:10:12.350 "zcopy": false, 00:10:12.350 "get_zone_info": false, 00:10:12.350 "zone_management": false, 00:10:12.350 "zone_append": false, 00:10:12.350 "compare": false, 00:10:12.350 "compare_and_write": false, 00:10:12.350 "abort": false, 00:10:12.350 "seek_hole": false, 00:10:12.350 "seek_data": false, 00:10:12.350 "copy": false, 00:10:12.350 "nvme_iov_md": false 00:10:12.350 }, 00:10:12.350 "memory_domains": [ 00:10:12.350 { 00:10:12.350 "dma_device_id": "system", 00:10:12.350 "dma_device_type": 1 00:10:12.350 }, 00:10:12.350 { 00:10:12.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.350 "dma_device_type": 2 00:10:12.350 }, 00:10:12.350 { 00:10:12.350 "dma_device_id": "system", 00:10:12.350 "dma_device_type": 1 00:10:12.350 }, 00:10:12.350 { 00:10:12.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.350 "dma_device_type": 2 00:10:12.350 } 00:10:12.350 ], 00:10:12.350 "driver_specific": { 00:10:12.350 "raid": { 00:10:12.350 "uuid": "96837f1d-9bf7-4a34-8d54-c7a8c4d7b0dd", 00:10:12.350 "strip_size_kb": 64, 00:10:12.350 "state": "online", 00:10:12.350 "raid_level": "raid0", 00:10:12.350 "superblock": true, 00:10:12.350 "num_base_bdevs": 2, 00:10:12.350 "num_base_bdevs_discovered": 2, 00:10:12.350 "num_base_bdevs_operational": 2, 00:10:12.350 "base_bdevs_list": [ 00:10:12.350 { 00:10:12.350 "name": "pt1", 00:10:12.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.350 "is_configured": true, 00:10:12.350 "data_offset": 2048, 00:10:12.350 "data_size": 63488 00:10:12.350 }, 00:10:12.350 { 00:10:12.350 "name": "pt2", 00:10:12.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.350 "is_configured": true, 00:10:12.350 "data_offset": 2048, 00:10:12.350 "data_size": 63488 00:10:12.350 } 00:10:12.350 ] 00:10:12.350 } 00:10:12.350 } 00:10:12.350 }' 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:12.350 pt2' 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.350 18:08:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.350 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.350 [2024-12-06 18:08:37.863538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 96837f1d-9bf7-4a34-8d54-c7a8c4d7b0dd '!=' 96837f1d-9bf7-4a34-8d54-c7a8c4d7b0dd ']' 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61217 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61217 ']' 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61217 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61217 00:10:12.608 killing process with pid 61217 
00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61217' 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61217 00:10:12.608 [2024-12-06 18:08:37.948017] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:12.608 18:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61217 00:10:12.608 [2024-12-06 18:08:37.948129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.608 [2024-12-06 18:08:37.948200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.608 [2024-12-06 18:08:37.948219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:12.887 [2024-12-06 18:08:38.139555] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:13.823 18:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:13.823 00:10:13.823 real 0m4.759s 00:10:13.823 user 0m6.902s 00:10:13.823 sys 0m0.736s 00:10:13.823 18:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.823 18:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.823 ************************************ 00:10:13.823 END TEST raid_superblock_test 00:10:13.823 ************************************ 00:10:13.823 18:08:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:13.823 18:08:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:13.823 18:08:39 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.823 18:08:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.823 ************************************ 00:10:13.823 START TEST raid_read_error_test 00:10:13.823 ************************************ 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:13.823 18:08:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RN6kUyW5y4 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61429 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61429 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61429 ']' 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:13.823 18:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.082 [2024-12-06 18:08:39.350805] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:10:14.083 [2024-12-06 18:08:39.350983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61429 ] 00:10:14.083 [2024-12-06 18:08:39.526264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.346 [2024-12-06 18:08:39.672093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.604 [2024-12-06 18:08:39.885790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.605 [2024-12-06 18:08:39.885878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.171 BaseBdev1_malloc 00:10:15.171 18:08:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.171 true 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.171 [2024-12-06 18:08:40.472030] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:15.171 [2024-12-06 18:08:40.472256] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.171 [2024-12-06 18:08:40.472303] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:15.171 [2024-12-06 18:08:40.472327] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.171 [2024-12-06 18:08:40.475200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.171 [2024-12-06 18:08:40.475258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:15.171 BaseBdev1 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.171 BaseBdev2_malloc 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.171 true 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.171 [2024-12-06 18:08:40.536672] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:15.171 [2024-12-06 18:08:40.536757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.171 [2024-12-06 18:08:40.536818] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:15.171 [2024-12-06 18:08:40.536843] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.171 [2024-12-06 18:08:40.539848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.171 [2024-12-06 18:08:40.539908] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:15.171 BaseBdev2 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.171 [2024-12-06 18:08:40.544882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.171 [2024-12-06 18:08:40.547625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.171 [2024-12-06 18:08:40.548092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:15.171 [2024-12-06 18:08:40.548256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:15.171 [2024-12-06 18:08:40.548678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:15.171 [2024-12-06 18:08:40.549089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:15.171 [2024-12-06 18:08:40.549242] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:15.171 [2024-12-06 18:08:40.549704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.171 "name": "raid_bdev1", 00:10:15.171 "uuid": "167c4340-6000-4b7a-991d-743183053376", 00:10:15.171 "strip_size_kb": 64, 00:10:15.171 "state": "online", 00:10:15.171 "raid_level": "raid0", 00:10:15.171 "superblock": true, 00:10:15.171 "num_base_bdevs": 2, 00:10:15.171 "num_base_bdevs_discovered": 2, 00:10:15.171 "num_base_bdevs_operational": 2, 00:10:15.171 "base_bdevs_list": [ 00:10:15.171 { 00:10:15.171 "name": "BaseBdev1", 00:10:15.171 "uuid": "bc3b47c8-ebea-5417-b478-8739f0de6515", 00:10:15.171 "is_configured": true, 00:10:15.171 "data_offset": 2048, 00:10:15.171 "data_size": 63488 00:10:15.171 }, 00:10:15.171 { 00:10:15.171 "name": "BaseBdev2", 00:10:15.171 "uuid": 
"d5714aa2-74aa-5739-8f1d-624268d26a56", 00:10:15.171 "is_configured": true, 00:10:15.171 "data_offset": 2048, 00:10:15.171 "data_size": 63488 00:10:15.171 } 00:10:15.171 ] 00:10:15.171 }' 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.171 18:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.734 18:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:15.734 18:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:15.734 [2024-12-06 18:08:41.199426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.668 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.668 "name": "raid_bdev1", 00:10:16.668 "uuid": "167c4340-6000-4b7a-991d-743183053376", 00:10:16.668 "strip_size_kb": 64, 00:10:16.668 "state": "online", 00:10:16.668 "raid_level": "raid0", 00:10:16.668 "superblock": true, 00:10:16.668 "num_base_bdevs": 2, 00:10:16.668 "num_base_bdevs_discovered": 2, 00:10:16.668 "num_base_bdevs_operational": 2, 00:10:16.668 "base_bdevs_list": [ 00:10:16.668 { 00:10:16.668 "name": "BaseBdev1", 00:10:16.668 "uuid": "bc3b47c8-ebea-5417-b478-8739f0de6515", 00:10:16.669 "is_configured": true, 00:10:16.669 "data_offset": 2048, 00:10:16.669 "data_size": 63488 00:10:16.669 }, 00:10:16.669 { 00:10:16.669 "name": "BaseBdev2", 00:10:16.669 "uuid": 
"d5714aa2-74aa-5739-8f1d-624268d26a56", 00:10:16.669 "is_configured": true, 00:10:16.669 "data_offset": 2048, 00:10:16.669 "data_size": 63488 00:10:16.669 } 00:10:16.669 ] 00:10:16.669 }' 00:10:16.669 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.669 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.236 [2024-12-06 18:08:42.564877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.236 [2024-12-06 18:08:42.564925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.236 [2024-12-06 18:08:42.568890] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.236 [2024-12-06 18:08:42.569175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.236 [2024-12-06 18:08:42.569431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.236 [2024-12-06 18:08:42.569616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, sta{ 00:10:17.236 "results": [ 00:10:17.236 { 00:10:17.236 "job": "raid_bdev1", 00:10:17.236 "core_mask": "0x1", 00:10:17.236 "workload": "randrw", 00:10:17.236 "percentage": 50, 00:10:17.236 "status": "finished", 00:10:17.236 "queue_depth": 1, 00:10:17.236 "io_size": 131072, 00:10:17.236 "runtime": 1.362945, 00:10:17.236 "iops": 9390.694415401942, 00:10:17.236 "mibps": 1173.8368019252428, 00:10:17.236 "io_failed": 1, 00:10:17.236 "io_timeout": 0, 00:10:17.236 "avg_latency_us": 146.9516, 00:10:17.236 "min_latency_us": 
42.35636363636364, 00:10:17.236 "max_latency_us": 1921.3963636363637 00:10:17.236 } 00:10:17.236 ], 00:10:17.236 "core_count": 1 00:10:17.236 } 00:10:17.236 te offline 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61429 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61429 ']' 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61429 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61429 00:10:17.236 killing process with pid 61429 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61429' 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61429 00:10:17.236 [2024-12-06 18:08:42.615170] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.236 18:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61429 00:10:17.236 [2024-12-06 18:08:42.744865] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:18.670 18:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RN6kUyW5y4 00:10:18.671 18:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:18.671 18:08:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:18.671 ************************************ 00:10:18.671 END TEST raid_read_error_test 00:10:18.671 ************************************ 00:10:18.671 18:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:18.671 18:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:18.671 18:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:18.671 18:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:18.671 18:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:18.671 00:10:18.671 real 0m4.642s 00:10:18.671 user 0m5.859s 00:10:18.671 sys 0m0.550s 00:10:18.671 18:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.671 18:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.671 18:08:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:10:18.671 18:08:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:18.671 18:08:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.671 18:08:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:18.671 ************************************ 00:10:18.671 START TEST raid_write_error_test 00:10:18.671 ************************************ 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:18.671 
18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:18.671 18:08:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AWv2kWNI5C 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61580 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61580 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61580 ']' 00:10:18.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.671 18:08:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.671 [2024-12-06 18:08:44.043801] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:10:18.671 [2024-12-06 18:08:44.043978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61580 ] 00:10:18.929 [2024-12-06 18:08:44.222649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.929 [2024-12-06 18:08:44.355322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.188 [2024-12-06 18:08:44.565868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.188 [2024-12-06 18:08:44.565935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.757 BaseBdev1_malloc 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.757 true 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.757 [2024-12-06 18:08:45.135239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:19.757 [2024-12-06 18:08:45.136382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.757 [2024-12-06 18:08:45.136435] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:19.757 [2024-12-06 18:08:45.136460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.757 [2024-12-06 18:08:45.139589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.757 [2024-12-06 18:08:45.139806] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:19.757 BaseBdev1 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.757 BaseBdev2_malloc 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:19.757 18:08:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.757 true 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.757 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.757 [2024-12-06 18:08:45.205713] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:19.757 [2024-12-06 18:08:45.205850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.757 [2024-12-06 18:08:45.205886] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:19.757 [2024-12-06 18:08:45.205909] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.757 [2024-12-06 18:08:45.209208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.757 [2024-12-06 18:08:45.209265] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:19.758 BaseBdev2 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.758 [2024-12-06 18:08:45.218081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:19.758 [2024-12-06 18:08:45.220788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.758 [2024-12-06 18:08:45.221170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:19.758 [2024-12-06 18:08:45.221203] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:19.758 [2024-12-06 18:08:45.221573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:19.758 [2024-12-06 18:08:45.221951] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:19.758 [2024-12-06 18:08:45.221974] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:19.758 [2024-12-06 18:08:45.222266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.758 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.015 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.015 "name": "raid_bdev1", 00:10:20.015 "uuid": "13a0b8fc-d5c1-4ecb-bb71-f66516b17a14", 00:10:20.015 "strip_size_kb": 64, 00:10:20.015 "state": "online", 00:10:20.015 "raid_level": "raid0", 00:10:20.015 "superblock": true, 00:10:20.015 "num_base_bdevs": 2, 00:10:20.015 "num_base_bdevs_discovered": 2, 00:10:20.015 "num_base_bdevs_operational": 2, 00:10:20.015 "base_bdevs_list": [ 00:10:20.015 { 00:10:20.015 "name": "BaseBdev1", 00:10:20.015 "uuid": "5de85b3c-e754-5c2c-8805-22db837e0fbd", 00:10:20.015 "is_configured": true, 00:10:20.015 "data_offset": 2048, 00:10:20.015 "data_size": 63488 00:10:20.015 }, 00:10:20.015 { 00:10:20.015 "name": "BaseBdev2", 00:10:20.015 "uuid": "dd5e2c94-670c-559c-ad49-a07ac3c80586", 00:10:20.015 "is_configured": true, 00:10:20.015 "data_offset": 2048, 00:10:20.015 "data_size": 63488 00:10:20.015 } 00:10:20.015 ] 00:10:20.015 }' 00:10:20.015 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.015 18:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.274 18:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:20.274 18:08:45 
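The configure trace above reports blockcnt 126976 / blocklen 512 for raid_bdev1, and the JSON shows data_offset 2048 and data_size 63488 for each base bdev. Those numbers are internally consistent, which a short stand-alone sketch can confirm: each 65536-block malloc bdev loses 2048 blocks to the superblock region (the test creates the raid with -s), and raid0 capacity is the sum of the remaining data blocks.

```shell
# Cross-check of the raid0 geometry reported in the trace above:
# blockcnt 126976 for raid_bdev1, built from two 65536-block base
# bdevs, each with data_offset 2048 (superblock) and data_size 63488.
base_blocks=65536   # from "bdev_malloc_create 32 512" (32 MiB / 512 B blocks)
data_offset=2048    # per-base-bdev superblock region (-s was passed)
num_bdevs=2
raid_blocks=$(( (base_blocks - data_offset) * num_bdevs ))
echo "raid_blocks=$raid_blocks"   # 126976, matching blockcnt in the log
```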
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:20.533 [2024-12-06 18:08:45.895876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.469 18:08:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.469 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.469 "name": "raid_bdev1", 00:10:21.469 "uuid": "13a0b8fc-d5c1-4ecb-bb71-f66516b17a14", 00:10:21.469 "strip_size_kb": 64, 00:10:21.469 "state": "online", 00:10:21.469 "raid_level": "raid0", 00:10:21.469 "superblock": true, 00:10:21.469 "num_base_bdevs": 2, 00:10:21.469 "num_base_bdevs_discovered": 2, 00:10:21.469 "num_base_bdevs_operational": 2, 00:10:21.469 "base_bdevs_list": [ 00:10:21.469 { 00:10:21.469 "name": "BaseBdev1", 00:10:21.469 "uuid": "5de85b3c-e754-5c2c-8805-22db837e0fbd", 00:10:21.469 "is_configured": true, 00:10:21.469 "data_offset": 2048, 00:10:21.469 "data_size": 63488 00:10:21.469 }, 00:10:21.469 { 00:10:21.469 "name": "BaseBdev2", 00:10:21.469 "uuid": "dd5e2c94-670c-559c-ad49-a07ac3c80586", 00:10:21.469 "is_configured": true, 00:10:21.469 "data_offset": 2048, 00:10:21.469 "data_size": 63488 00:10:21.469 } 00:10:21.469 ] 00:10:21.469 }' 00:10:21.470 18:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.470 18:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
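The verify_raid_bdev_state helper traced above fetches the raid bdev JSON via rpc_cmd and compares fields such as state and raid_level against the expected values. A minimal stand-alone sketch of that style of check is below; it uses a copy of the JSON from this log instead of a live rpc_cmd call, and plain sed instead of jq, so the field-extraction helper is an illustrative substitute rather than the real implementation.

```shell
# Stand-alone sketch of a verify_raid_bdev_state-style check.
# raid_bdev_info is copied (abridged) from the log above; a real run
# obtains it from "rpc_cmd bdev_raid_get_bdevs all" filtered with jq.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid0",
  "strip_size_kb": 64,
  "num_base_bdevs_discovered": 2
}'

get_field() {
    # extract a quoted string field (assumes one field per line)
    echo "$raid_bdev_info" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p"
}

state=$(get_field state)
raid_level=$(get_field raid_level)

[ "$state" = online ] || { echo "unexpected state: $state"; exit 1; }
[ "$raid_level" = raid0 ] || { echo "unexpected level: $raid_level"; exit 1; }
echo "state=$state level=$raid_level"
```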
# rpc_cmd bdev_raid_delete raid_bdev1 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.039 [2024-12-06 18:08:47.281847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.039 [2024-12-06 18:08:47.281895] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.039 [2024-12-06 18:08:47.285399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.039 [2024-12-06 18:08:47.285473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.039 [2024-12-06 18:08:47.285524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.039 [2024-12-06 18:08:47.285547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:22.039 { 00:10:22.039 "results": [ 00:10:22.039 { 00:10:22.039 "job": "raid_bdev1", 00:10:22.039 "core_mask": "0x1", 00:10:22.039 "workload": "randrw", 00:10:22.039 "percentage": 50, 00:10:22.039 "status": "finished", 00:10:22.039 "queue_depth": 1, 00:10:22.039 "io_size": 131072, 00:10:22.039 "runtime": 1.38327, 00:10:22.039 "iops": 10086.244912417676, 00:10:22.039 "mibps": 1260.7806140522096, 00:10:22.039 "io_failed": 1, 00:10:22.039 "io_timeout": 0, 00:10:22.039 "avg_latency_us": 137.73695015083103, 00:10:22.039 "min_latency_us": 43.054545454545455, 00:10:22.039 "max_latency_us": 1906.5018181818182 00:10:22.039 } 00:10:22.039 ], 00:10:22.039 "core_count": 1 00:10:22.039 } 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61580 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61580 ']' 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61580 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61580 00:10:22.039 killing process with pid 61580 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61580' 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61580 00:10:22.039 [2024-12-06 18:08:47.320238] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.039 18:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61580 00:10:22.039 [2024-12-06 18:08:47.443342] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.415 18:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AWv2kWNI5C 00:10:23.415 18:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:23.415 18:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:23.415 18:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:23.415 18:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:23.415 ************************************ 00:10:23.415 END TEST raid_write_error_test 00:10:23.415 ************************************ 00:10:23.415 
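The fail_per_s value above is pulled out of the bdevperf log with the `grep -v Job | grep raid_bdev1 | awk '{print $6}'` pipeline traced at bdev_raid.sh@845. The sketch below runs the same pipeline against a sample two-line log; the exact column layout of the summary row is an assumption reconstructed from this run, where field 6 comes out as 0.72 (consistent with io_failed / runtime = 1 / 1.38327 s from the JSON results).

```shell
# Sketch of the fail_per_s extraction done by bdev_raid.sh@845.
# The sample text imitates a bdevperf per-job summary; the column
# positions are an assumption based on awk '{print $6}' yielding 0.72.
bdevperf_log='Job: raid_bdev1 ended in about 1.38 seconds with error
raid_bdev1 : 10086.24 IOPS 1260.78 0.72 MiB/s'

fail_per_s=$(echo "$bdevperf_log" | grep -v Job | grep raid_bdev1 | awk '{print $6}')

# raid0 has no redundancy, so the injected write error must surface as
# a nonzero failure rate (the "has_redundancy raid0 -> return 1" branch
# above leads to the "fail_per_s != 0.00" assertion).
[ "$fail_per_s" != 0.00 ] || exit 1
echo "fail_per_s=$fail_per_s"
```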
18:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.415 18:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:23.415 18:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:23.415 00:10:23.415 real 0m4.818s 00:10:23.415 user 0m5.990s 00:10:23.415 sys 0m0.568s 00:10:23.415 18:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.415 18:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.415 18:08:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:23.415 18:08:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:23.415 18:08:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:23.415 18:08:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.415 18:08:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.415 ************************************ 00:10:23.415 START TEST raid_state_function_test 00:10:23.415 ************************************ 00:10:23.415 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:10:23.415 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:23.415 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:23.415 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:23.415 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:23.415 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:23.415 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:10:23.415 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:23.415 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.415 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.415 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:23.415 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.415 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:23.416 Process raid pid: 61724 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61724 
00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61724' 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61724 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61724 ']' 00:10:23.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.416 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.416 [2024-12-06 18:08:48.907565] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:10:23.416 [2024-12-06 18:08:48.908140] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.673 [2024-12-06 18:08:49.097217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.929 [2024-12-06 18:08:49.278060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.186 [2024-12-06 18:08:49.530980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.186 [2024-12-06 18:08:49.531049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.443 [2024-12-06 18:08:49.933715] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.443 [2024-12-06 18:08:49.933815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.443 [2024-12-06 18:08:49.933838] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.443 [2024-12-06 18:08:49.933860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.443 18:08:49 
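The `waitforlisten 61724` call and the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above reflect the common polling pattern used by the test harness: spin until the target process has created its RPC socket, bounded by max_retries. The sketch below is a hedged, simplified stand-in, not the real common/autotest_common.sh implementation — it only watches for the path appearing (with `-e` rather than a true UNIX-socket check) and does not verify the pid is still alive.

```shell
# Simplified waitforlisten-style helper: poll until rpc_addr exists,
# up to max_retries attempts with a short sleep between checks.
waitforsocket() {
    local rpc_addr=$1 max_retries=${2:-100} i=0
    while [ ! -e "$rpc_addr" ]; do
        i=$((i + 1))
        [ "$i" -le "$max_retries" ] || return 1
        sleep 0.1
    done
    return 0
}

# demo: a background job "starts listening" after a short delay
sock=$(mktemp -u)
( sleep 0.3; touch "$sock" ) &
waitforsocket "$sock"
ok=$?
wait
rm -f "$sock"
echo "waitforsocket returned $ok"
```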
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.443 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.699 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.699 "name": "Existed_Raid", 00:10:24.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.699 "strip_size_kb": 64, 00:10:24.699 "state": "configuring", 00:10:24.699 
"raid_level": "concat", 00:10:24.699 "superblock": false, 00:10:24.699 "num_base_bdevs": 2, 00:10:24.699 "num_base_bdevs_discovered": 0, 00:10:24.699 "num_base_bdevs_operational": 2, 00:10:24.699 "base_bdevs_list": [ 00:10:24.699 { 00:10:24.699 "name": "BaseBdev1", 00:10:24.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.699 "is_configured": false, 00:10:24.699 "data_offset": 0, 00:10:24.699 "data_size": 0 00:10:24.699 }, 00:10:24.699 { 00:10:24.699 "name": "BaseBdev2", 00:10:24.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.699 "is_configured": false, 00:10:24.699 "data_offset": 0, 00:10:24.699 "data_size": 0 00:10:24.699 } 00:10:24.699 ] 00:10:24.699 }' 00:10:24.699 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.699 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.957 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:24.957 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.957 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.957 [2024-12-06 18:08:50.461791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.957 [2024-12-06 18:08:50.462257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:24.957 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.957 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:24.957 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.957 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:24.957 [2024-12-06 18:08:50.469789] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.957 [2024-12-06 18:08:50.469857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.957 [2024-12-06 18:08:50.469876] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.957 [2024-12-06 18:08:50.469899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.957 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.957 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:24.957 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.957 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 [2024-12-06 18:08:50.522915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.215 BaseBdev1 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 [ 00:10:25.215 { 00:10:25.215 "name": "BaseBdev1", 00:10:25.215 "aliases": [ 00:10:25.215 "0704cab7-92c2-4633-bad5-8a072e9cf173" 00:10:25.215 ], 00:10:25.215 "product_name": "Malloc disk", 00:10:25.215 "block_size": 512, 00:10:25.215 "num_blocks": 65536, 00:10:25.215 "uuid": "0704cab7-92c2-4633-bad5-8a072e9cf173", 00:10:25.215 "assigned_rate_limits": { 00:10:25.215 "rw_ios_per_sec": 0, 00:10:25.215 "rw_mbytes_per_sec": 0, 00:10:25.215 "r_mbytes_per_sec": 0, 00:10:25.215 "w_mbytes_per_sec": 0 00:10:25.215 }, 00:10:25.215 "claimed": true, 00:10:25.215 "claim_type": "exclusive_write", 00:10:25.215 "zoned": false, 00:10:25.215 "supported_io_types": { 00:10:25.215 "read": true, 00:10:25.215 "write": true, 00:10:25.215 "unmap": true, 00:10:25.215 "flush": true, 00:10:25.215 "reset": true, 00:10:25.215 "nvme_admin": false, 00:10:25.215 "nvme_io": false, 00:10:25.215 "nvme_io_md": false, 00:10:25.215 "write_zeroes": true, 00:10:25.215 "zcopy": true, 00:10:25.215 "get_zone_info": false, 00:10:25.215 "zone_management": false, 00:10:25.215 "zone_append": false, 00:10:25.215 "compare": false, 00:10:25.215 "compare_and_write": false, 00:10:25.215 "abort": true, 00:10:25.215 "seek_hole": false, 00:10:25.215 "seek_data": false, 00:10:25.215 "copy": true, 00:10:25.215 "nvme_iov_md": 
false 00:10:25.215 }, 00:10:25.215 "memory_domains": [ 00:10:25.215 { 00:10:25.215 "dma_device_id": "system", 00:10:25.215 "dma_device_type": 1 00:10:25.215 }, 00:10:25.215 { 00:10:25.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.215 "dma_device_type": 2 00:10:25.215 } 00:10:25.215 ], 00:10:25.215 "driver_specific": {} 00:10:25.215 } 00:10:25.215 ] 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.215 18:08:50 
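The bdev_get_bdevs descriptor above reports block_size 512 and num_blocks 65536 for BaseBdev1, which is exactly what `bdev_malloc_create 32 512` requests: a 32 MiB malloc bdev with 512-byte blocks. A one-liner confirms the arithmetic:

```shell
# Size check for the BaseBdev1 descriptor shown above:
# 65536 blocks of 512 bytes = 32 MiB, matching bdev_malloc_create 32 512.
num_blocks=65536
block_size=512
size_mib=$(( num_blocks * block_size / 1024 / 1024 ))
echo "size=${size_mib}MiB"   # size=32MiB
```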
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.215 "name": "Existed_Raid", 00:10:25.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.215 "strip_size_kb": 64, 00:10:25.215 "state": "configuring", 00:10:25.215 "raid_level": "concat", 00:10:25.215 "superblock": false, 00:10:25.215 "num_base_bdevs": 2, 00:10:25.215 "num_base_bdevs_discovered": 1, 00:10:25.215 "num_base_bdevs_operational": 2, 00:10:25.215 "base_bdevs_list": [ 00:10:25.215 { 00:10:25.215 "name": "BaseBdev1", 00:10:25.215 "uuid": "0704cab7-92c2-4633-bad5-8a072e9cf173", 00:10:25.215 "is_configured": true, 00:10:25.215 "data_offset": 0, 00:10:25.215 "data_size": 65536 00:10:25.215 }, 00:10:25.215 { 00:10:25.215 "name": "BaseBdev2", 00:10:25.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.215 "is_configured": false, 00:10:25.215 "data_offset": 0, 00:10:25.215 "data_size": 0 00:10:25.215 } 00:10:25.215 ] 00:10:25.215 }' 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.215 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.781 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.781 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.781 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.781 [2024-12-06 18:08:51.059162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.781 [2024-12-06 18:08:51.059386] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:25.781 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.781 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:25.781 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.781 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.781 [2024-12-06 18:08:51.067150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.781 [2024-12-06 18:08:51.069700] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.781 [2024-12-06 18:08:51.069761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.782 "name": "Existed_Raid", 00:10:25.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.782 "strip_size_kb": 64, 00:10:25.782 "state": "configuring", 00:10:25.782 "raid_level": "concat", 00:10:25.782 "superblock": false, 00:10:25.782 "num_base_bdevs": 2, 00:10:25.782 "num_base_bdevs_discovered": 1, 00:10:25.782 "num_base_bdevs_operational": 2, 00:10:25.782 "base_bdevs_list": [ 00:10:25.782 { 00:10:25.782 "name": "BaseBdev1", 00:10:25.782 "uuid": "0704cab7-92c2-4633-bad5-8a072e9cf173", 00:10:25.782 "is_configured": true, 00:10:25.782 "data_offset": 0, 00:10:25.782 "data_size": 65536 00:10:25.782 }, 00:10:25.782 { 00:10:25.782 "name": "BaseBdev2", 00:10:25.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.782 "is_configured": false, 00:10:25.782 "data_offset": 0, 00:10:25.782 "data_size": 0 
00:10:25.782 } 00:10:25.782 ] 00:10:25.782 }' 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.782 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.040 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:26.040 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.040 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.298 [2024-12-06 18:08:51.597968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.298 [2024-12-06 18:08:51.598039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:26.298 [2024-12-06 18:08:51.598051] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:26.298 [2024-12-06 18:08:51.598394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:26.298 [2024-12-06 18:08:51.598615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:26.298 [2024-12-06 18:08:51.598635] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:26.298 [2024-12-06 18:08:51.599022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.298 BaseBdev2 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.298 18:08:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.298 [ 00:10:26.298 { 00:10:26.298 "name": "BaseBdev2", 00:10:26.298 "aliases": [ 00:10:26.298 "50642126-018a-4ff3-a26d-e64d806102f4" 00:10:26.298 ], 00:10:26.298 "product_name": "Malloc disk", 00:10:26.298 "block_size": 512, 00:10:26.298 "num_blocks": 65536, 00:10:26.298 "uuid": "50642126-018a-4ff3-a26d-e64d806102f4", 00:10:26.298 "assigned_rate_limits": { 00:10:26.298 "rw_ios_per_sec": 0, 00:10:26.298 "rw_mbytes_per_sec": 0, 00:10:26.298 "r_mbytes_per_sec": 0, 00:10:26.298 "w_mbytes_per_sec": 0 00:10:26.298 }, 00:10:26.298 "claimed": true, 00:10:26.298 "claim_type": "exclusive_write", 00:10:26.298 "zoned": false, 00:10:26.298 "supported_io_types": { 00:10:26.298 "read": true, 00:10:26.298 "write": true, 00:10:26.298 "unmap": true, 00:10:26.298 "flush": true, 00:10:26.298 "reset": true, 00:10:26.298 "nvme_admin": false, 00:10:26.298 "nvme_io": false, 00:10:26.298 "nvme_io_md": 
false, 00:10:26.298 "write_zeroes": true, 00:10:26.298 "zcopy": true, 00:10:26.298 "get_zone_info": false, 00:10:26.298 "zone_management": false, 00:10:26.298 "zone_append": false, 00:10:26.298 "compare": false, 00:10:26.298 "compare_and_write": false, 00:10:26.298 "abort": true, 00:10:26.298 "seek_hole": false, 00:10:26.298 "seek_data": false, 00:10:26.298 "copy": true, 00:10:26.298 "nvme_iov_md": false 00:10:26.298 }, 00:10:26.298 "memory_domains": [ 00:10:26.298 { 00:10:26.298 "dma_device_id": "system", 00:10:26.298 "dma_device_type": 1 00:10:26.298 }, 00:10:26.298 { 00:10:26.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.298 "dma_device_type": 2 00:10:26.298 } 00:10:26.298 ], 00:10:26.298 "driver_specific": {} 00:10:26.298 } 00:10:26.298 ] 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.298 "name": "Existed_Raid", 00:10:26.298 "uuid": "706bf96e-3752-48b9-9053-a4a72ef6dd98", 00:10:26.298 "strip_size_kb": 64, 00:10:26.298 "state": "online", 00:10:26.298 "raid_level": "concat", 00:10:26.298 "superblock": false, 00:10:26.298 "num_base_bdevs": 2, 00:10:26.298 "num_base_bdevs_discovered": 2, 00:10:26.298 "num_base_bdevs_operational": 2, 00:10:26.298 "base_bdevs_list": [ 00:10:26.298 { 00:10:26.298 "name": "BaseBdev1", 00:10:26.298 "uuid": "0704cab7-92c2-4633-bad5-8a072e9cf173", 00:10:26.298 "is_configured": true, 00:10:26.298 "data_offset": 0, 00:10:26.298 "data_size": 65536 00:10:26.298 }, 00:10:26.298 { 00:10:26.298 "name": "BaseBdev2", 00:10:26.298 "uuid": "50642126-018a-4ff3-a26d-e64d806102f4", 00:10:26.298 "is_configured": true, 00:10:26.298 "data_offset": 0, 00:10:26.298 "data_size": 65536 00:10:26.298 } 00:10:26.298 ] 00:10:26.298 }' 00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:26.298 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.555 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:26.555 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:26.555 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:26.555 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:26.555 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.555 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.555 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:26.555 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.555 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.555 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.555 [2024-12-06 18:08:52.070543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.813 "name": "Existed_Raid", 00:10:26.813 "aliases": [ 00:10:26.813 "706bf96e-3752-48b9-9053-a4a72ef6dd98" 00:10:26.813 ], 00:10:26.813 "product_name": "Raid Volume", 00:10:26.813 "block_size": 512, 00:10:26.813 "num_blocks": 131072, 00:10:26.813 "uuid": "706bf96e-3752-48b9-9053-a4a72ef6dd98", 00:10:26.813 "assigned_rate_limits": { 00:10:26.813 "rw_ios_per_sec": 0, 00:10:26.813 "rw_mbytes_per_sec": 0, 00:10:26.813 "r_mbytes_per_sec": 
0, 00:10:26.813 "w_mbytes_per_sec": 0 00:10:26.813 }, 00:10:26.813 "claimed": false, 00:10:26.813 "zoned": false, 00:10:26.813 "supported_io_types": { 00:10:26.813 "read": true, 00:10:26.813 "write": true, 00:10:26.813 "unmap": true, 00:10:26.813 "flush": true, 00:10:26.813 "reset": true, 00:10:26.813 "nvme_admin": false, 00:10:26.813 "nvme_io": false, 00:10:26.813 "nvme_io_md": false, 00:10:26.813 "write_zeroes": true, 00:10:26.813 "zcopy": false, 00:10:26.813 "get_zone_info": false, 00:10:26.813 "zone_management": false, 00:10:26.813 "zone_append": false, 00:10:26.813 "compare": false, 00:10:26.813 "compare_and_write": false, 00:10:26.813 "abort": false, 00:10:26.813 "seek_hole": false, 00:10:26.813 "seek_data": false, 00:10:26.813 "copy": false, 00:10:26.813 "nvme_iov_md": false 00:10:26.813 }, 00:10:26.813 "memory_domains": [ 00:10:26.813 { 00:10:26.813 "dma_device_id": "system", 00:10:26.813 "dma_device_type": 1 00:10:26.813 }, 00:10:26.813 { 00:10:26.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.813 "dma_device_type": 2 00:10:26.813 }, 00:10:26.813 { 00:10:26.813 "dma_device_id": "system", 00:10:26.813 "dma_device_type": 1 00:10:26.813 }, 00:10:26.813 { 00:10:26.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.813 "dma_device_type": 2 00:10:26.813 } 00:10:26.813 ], 00:10:26.813 "driver_specific": { 00:10:26.813 "raid": { 00:10:26.813 "uuid": "706bf96e-3752-48b9-9053-a4a72ef6dd98", 00:10:26.813 "strip_size_kb": 64, 00:10:26.813 "state": "online", 00:10:26.813 "raid_level": "concat", 00:10:26.813 "superblock": false, 00:10:26.813 "num_base_bdevs": 2, 00:10:26.813 "num_base_bdevs_discovered": 2, 00:10:26.813 "num_base_bdevs_operational": 2, 00:10:26.813 "base_bdevs_list": [ 00:10:26.813 { 00:10:26.813 "name": "BaseBdev1", 00:10:26.813 "uuid": "0704cab7-92c2-4633-bad5-8a072e9cf173", 00:10:26.813 "is_configured": true, 00:10:26.813 "data_offset": 0, 00:10:26.813 "data_size": 65536 00:10:26.813 }, 00:10:26.813 { 00:10:26.813 "name": "BaseBdev2", 
00:10:26.813 "uuid": "50642126-018a-4ff3-a26d-e64d806102f4", 00:10:26.813 "is_configured": true, 00:10:26.813 "data_offset": 0, 00:10:26.813 "data_size": 65536 00:10:26.813 } 00:10:26.813 ] 00:10:26.813 } 00:10:26.813 } 00:10:26.813 }' 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:26.813 BaseBdev2' 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.813 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.813 [2024-12-06 18:08:52.302298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:26.813 [2024-12-06 18:08:52.302342] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.813 [2024-12-06 18:08:52.302408] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.080 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.081 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.081 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.081 "name": "Existed_Raid", 00:10:27.081 "uuid": "706bf96e-3752-48b9-9053-a4a72ef6dd98", 00:10:27.081 "strip_size_kb": 64, 00:10:27.081 
"state": "offline", 00:10:27.081 "raid_level": "concat", 00:10:27.081 "superblock": false, 00:10:27.081 "num_base_bdevs": 2, 00:10:27.081 "num_base_bdevs_discovered": 1, 00:10:27.081 "num_base_bdevs_operational": 1, 00:10:27.081 "base_bdevs_list": [ 00:10:27.081 { 00:10:27.081 "name": null, 00:10:27.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.081 "is_configured": false, 00:10:27.081 "data_offset": 0, 00:10:27.081 "data_size": 65536 00:10:27.081 }, 00:10:27.081 { 00:10:27.081 "name": "BaseBdev2", 00:10:27.081 "uuid": "50642126-018a-4ff3-a26d-e64d806102f4", 00:10:27.081 "is_configured": true, 00:10:27.081 "data_offset": 0, 00:10:27.081 "data_size": 65536 00:10:27.081 } 00:10:27.081 ] 00:10:27.081 }' 00:10:27.081 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.081 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.664 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:27.664 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:27.664 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.664 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.664 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.664 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:27.665 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.665 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:27.665 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:27.665 18:08:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:27.665 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.665 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.665 [2024-12-06 18:08:52.909387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:27.665 [2024-12-06 18:08:52.909458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:27.665 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.665 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:27.665 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:27.665 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.665 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61724 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61724 ']' 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61724 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61724 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.665 killing process with pid 61724 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61724' 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61724 00:10:27.665 [2024-12-06 18:08:53.101978] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.665 18:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61724 00:10:27.665 [2024-12-06 18:08:53.116722] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.038 18:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:29.038 00:10:29.038 real 0m5.358s 00:10:29.038 user 0m8.073s 00:10:29.038 sys 0m0.701s 00:10:29.038 18:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.039 ************************************ 00:10:29.039 END TEST raid_state_function_test 00:10:29.039 ************************************ 00:10:29.039 18:08:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:29.039 18:08:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:10:29.039 18:08:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.039 18:08:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.039 ************************************ 00:10:29.039 START TEST raid_state_function_test_sb 00:10:29.039 ************************************ 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.039 Process raid pid: 61977 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61977 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61977' 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61977 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61977 ']' 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.039 Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.039 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.039 [2024-12-06 18:08:54.318283] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:10:29.039 [2024-12-06 18:08:54.318735] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.039 [2024-12-06 18:08:54.505092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.297 [2024-12-06 18:08:54.634860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.556 [2024-12-06 18:08:54.839718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.556 [2024-12-06 18:08:54.839979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.814 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.814 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:29.814 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:29.814 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.814 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.814 [2024-12-06 18:08:55.317860] bdev.c:8674:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.814 [2024-12-06 18:08:55.317929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.814 [2024-12-06 18:08:55.317946] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.814 [2024-12-06 18:08:55.317963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.814 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.814 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:29.814 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.814 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.814 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.814 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.814 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:29.815 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.815 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.815 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.815 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.815 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.815 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:29.815 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.815 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.073 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.073 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.073 "name": "Existed_Raid", 00:10:30.073 "uuid": "2bc10408-9302-4b97-9cf5-6204e7f158a2", 00:10:30.073 "strip_size_kb": 64, 00:10:30.073 "state": "configuring", 00:10:30.073 "raid_level": "concat", 00:10:30.073 "superblock": true, 00:10:30.073 "num_base_bdevs": 2, 00:10:30.073 "num_base_bdevs_discovered": 0, 00:10:30.073 "num_base_bdevs_operational": 2, 00:10:30.073 "base_bdevs_list": [ 00:10:30.073 { 00:10:30.073 "name": "BaseBdev1", 00:10:30.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.073 "is_configured": false, 00:10:30.073 "data_offset": 0, 00:10:30.073 "data_size": 0 00:10:30.073 }, 00:10:30.073 { 00:10:30.073 "name": "BaseBdev2", 00:10:30.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.073 "is_configured": false, 00:10:30.073 "data_offset": 0, 00:10:30.073 "data_size": 0 00:10:30.073 } 00:10:30.073 ] 00:10:30.073 }' 00:10:30.073 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.073 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.332 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:30.332 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.332 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.332 [2024-12-06 18:08:55.801909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:10:30.332 [2024-12-06 18:08:55.802077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:30.332 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.332 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:30.332 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.332 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.332 [2024-12-06 18:08:55.809911] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.332 [2024-12-06 18:08:55.809963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.332 [2024-12-06 18:08:55.809979] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.332 [2024-12-06 18:08:55.809998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.332 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.332 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.332 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.332 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.591 [2024-12-06 18:08:55.854959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.591 BaseBdev1 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.591 [ 00:10:30.591 { 00:10:30.591 "name": "BaseBdev1", 00:10:30.591 "aliases": [ 00:10:30.591 "361c0604-12e0-41cb-bfc5-f026ed62e603" 00:10:30.591 ], 00:10:30.591 "product_name": "Malloc disk", 00:10:30.591 "block_size": 512, 00:10:30.591 "num_blocks": 65536, 00:10:30.591 "uuid": "361c0604-12e0-41cb-bfc5-f026ed62e603", 00:10:30.591 "assigned_rate_limits": { 00:10:30.591 "rw_ios_per_sec": 0, 00:10:30.591 "rw_mbytes_per_sec": 0, 00:10:30.591 "r_mbytes_per_sec": 0, 00:10:30.591 "w_mbytes_per_sec": 0 00:10:30.591 }, 00:10:30.591 "claimed": true, 
00:10:30.591 "claim_type": "exclusive_write", 00:10:30.591 "zoned": false, 00:10:30.591 "supported_io_types": { 00:10:30.591 "read": true, 00:10:30.591 "write": true, 00:10:30.591 "unmap": true, 00:10:30.591 "flush": true, 00:10:30.591 "reset": true, 00:10:30.591 "nvme_admin": false, 00:10:30.591 "nvme_io": false, 00:10:30.591 "nvme_io_md": false, 00:10:30.591 "write_zeroes": true, 00:10:30.591 "zcopy": true, 00:10:30.591 "get_zone_info": false, 00:10:30.591 "zone_management": false, 00:10:30.591 "zone_append": false, 00:10:30.591 "compare": false, 00:10:30.591 "compare_and_write": false, 00:10:30.591 "abort": true, 00:10:30.591 "seek_hole": false, 00:10:30.591 "seek_data": false, 00:10:30.591 "copy": true, 00:10:30.591 "nvme_iov_md": false 00:10:30.591 }, 00:10:30.591 "memory_domains": [ 00:10:30.591 { 00:10:30.591 "dma_device_id": "system", 00:10:30.591 "dma_device_type": 1 00:10:30.591 }, 00:10:30.591 { 00:10:30.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.591 "dma_device_type": 2 00:10:30.591 } 00:10:30.591 ], 00:10:30.591 "driver_specific": {} 00:10:30.591 } 00:10:30.591 ] 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.591 18:08:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.591 "name": "Existed_Raid", 00:10:30.591 "uuid": "dcda806d-290a-42c0-ab5b-433ac4131552", 00:10:30.591 "strip_size_kb": 64, 00:10:30.591 "state": "configuring", 00:10:30.591 "raid_level": "concat", 00:10:30.591 "superblock": true, 00:10:30.591 "num_base_bdevs": 2, 00:10:30.591 "num_base_bdevs_discovered": 1, 00:10:30.591 "num_base_bdevs_operational": 2, 00:10:30.591 "base_bdevs_list": [ 00:10:30.591 { 00:10:30.591 "name": "BaseBdev1", 00:10:30.591 "uuid": "361c0604-12e0-41cb-bfc5-f026ed62e603", 00:10:30.591 "is_configured": true, 00:10:30.591 "data_offset": 2048, 00:10:30.591 "data_size": 63488 00:10:30.591 }, 00:10:30.591 { 00:10:30.591 "name": "BaseBdev2", 00:10:30.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.591 
"is_configured": false, 00:10:30.591 "data_offset": 0, 00:10:30.591 "data_size": 0 00:10:30.591 } 00:10:30.591 ] 00:10:30.591 }' 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.591 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.160 [2024-12-06 18:08:56.395244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.160 [2024-12-06 18:08:56.395306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.160 [2024-12-06 18:08:56.403318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.160 [2024-12-06 18:08:56.405795] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.160 [2024-12-06 18:08:56.405983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.160 18:08:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.160 18:08:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.160 "name": "Existed_Raid", 00:10:31.160 "uuid": "950219e1-3888-4dd8-95e9-addac2125f0b", 00:10:31.160 "strip_size_kb": 64, 00:10:31.160 "state": "configuring", 00:10:31.160 "raid_level": "concat", 00:10:31.160 "superblock": true, 00:10:31.160 "num_base_bdevs": 2, 00:10:31.160 "num_base_bdevs_discovered": 1, 00:10:31.160 "num_base_bdevs_operational": 2, 00:10:31.160 "base_bdevs_list": [ 00:10:31.160 { 00:10:31.160 "name": "BaseBdev1", 00:10:31.160 "uuid": "361c0604-12e0-41cb-bfc5-f026ed62e603", 00:10:31.160 "is_configured": true, 00:10:31.160 "data_offset": 2048, 00:10:31.160 "data_size": 63488 00:10:31.160 }, 00:10:31.160 { 00:10:31.160 "name": "BaseBdev2", 00:10:31.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.160 "is_configured": false, 00:10:31.160 "data_offset": 0, 00:10:31.160 "data_size": 0 00:10:31.160 } 00:10:31.160 ] 00:10:31.160 }' 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.160 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.420 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:31.420 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.420 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.679 [2024-12-06 18:08:56.946331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.679 [2024-12-06 18:08:56.946884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:31.679 [2024-12-06 18:08:56.947025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:31.679 BaseBdev2 00:10:31.679 [2024-12-06 18:08:56.947414] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:31.679 [2024-12-06 18:08:56.947635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:31.679 [2024-12-06 18:08:56.947659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:31.679 [2024-12-06 18:08:56.947859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.679 
18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.679 [ 00:10:31.679 { 00:10:31.679 "name": "BaseBdev2", 00:10:31.679 "aliases": [ 00:10:31.679 "297e03ba-80e9-4bd5-861b-89476c0e92d7" 00:10:31.679 ], 00:10:31.679 "product_name": "Malloc disk", 00:10:31.679 "block_size": 512, 00:10:31.679 "num_blocks": 65536, 00:10:31.679 "uuid": "297e03ba-80e9-4bd5-861b-89476c0e92d7", 00:10:31.679 "assigned_rate_limits": { 00:10:31.679 "rw_ios_per_sec": 0, 00:10:31.679 "rw_mbytes_per_sec": 0, 00:10:31.679 "r_mbytes_per_sec": 0, 00:10:31.679 "w_mbytes_per_sec": 0 00:10:31.679 }, 00:10:31.679 "claimed": true, 00:10:31.679 "claim_type": "exclusive_write", 00:10:31.679 "zoned": false, 00:10:31.679 "supported_io_types": { 00:10:31.679 "read": true, 00:10:31.679 "write": true, 00:10:31.679 "unmap": true, 00:10:31.679 "flush": true, 00:10:31.679 "reset": true, 00:10:31.679 "nvme_admin": false, 00:10:31.679 "nvme_io": false, 00:10:31.679 "nvme_io_md": false, 00:10:31.679 "write_zeroes": true, 00:10:31.679 "zcopy": true, 00:10:31.679 "get_zone_info": false, 00:10:31.679 "zone_management": false, 00:10:31.679 "zone_append": false, 00:10:31.679 "compare": false, 00:10:31.679 "compare_and_write": false, 00:10:31.679 "abort": true, 00:10:31.679 "seek_hole": false, 00:10:31.679 "seek_data": false, 00:10:31.679 "copy": true, 00:10:31.679 "nvme_iov_md": false 00:10:31.679 }, 00:10:31.679 "memory_domains": [ 00:10:31.679 { 00:10:31.679 "dma_device_id": "system", 00:10:31.679 "dma_device_type": 1 00:10:31.679 }, 00:10:31.679 { 00:10:31.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.679 "dma_device_type": 2 00:10:31.679 } 00:10:31.679 ], 00:10:31.679 "driver_specific": {} 00:10:31.679 } 00:10:31.679 ] 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:31.679 18:08:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.679 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.680 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.680 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.680 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.680 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.680 18:08:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.680 "name": "Existed_Raid", 00:10:31.680 "uuid": "950219e1-3888-4dd8-95e9-addac2125f0b", 00:10:31.680 "strip_size_kb": 64, 00:10:31.680 "state": "online", 00:10:31.680 "raid_level": "concat", 00:10:31.680 "superblock": true, 00:10:31.680 "num_base_bdevs": 2, 00:10:31.680 "num_base_bdevs_discovered": 2, 00:10:31.680 "num_base_bdevs_operational": 2, 00:10:31.680 "base_bdevs_list": [ 00:10:31.680 { 00:10:31.680 "name": "BaseBdev1", 00:10:31.680 "uuid": "361c0604-12e0-41cb-bfc5-f026ed62e603", 00:10:31.680 "is_configured": true, 00:10:31.680 "data_offset": 2048, 00:10:31.680 "data_size": 63488 00:10:31.680 }, 00:10:31.680 { 00:10:31.680 "name": "BaseBdev2", 00:10:31.680 "uuid": "297e03ba-80e9-4bd5-861b-89476c0e92d7", 00:10:31.680 "is_configured": true, 00:10:31.680 "data_offset": 2048, 00:10:31.680 "data_size": 63488 00:10:31.680 } 00:10:31.680 ] 00:10:31.680 }' 00:10:31.680 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.680 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.247 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:32.247 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:32.247 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:32.247 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:32.247 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:32.247 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:32.247 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:32.247 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:32.247 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.247 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.247 [2024-12-06 18:08:57.486932] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.247 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.247 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:32.247 "name": "Existed_Raid", 00:10:32.247 "aliases": [ 00:10:32.247 "950219e1-3888-4dd8-95e9-addac2125f0b" 00:10:32.247 ], 00:10:32.247 "product_name": "Raid Volume", 00:10:32.247 "block_size": 512, 00:10:32.247 "num_blocks": 126976, 00:10:32.247 "uuid": "950219e1-3888-4dd8-95e9-addac2125f0b", 00:10:32.247 "assigned_rate_limits": { 00:10:32.247 "rw_ios_per_sec": 0, 00:10:32.247 "rw_mbytes_per_sec": 0, 00:10:32.247 "r_mbytes_per_sec": 0, 00:10:32.247 "w_mbytes_per_sec": 0 00:10:32.247 }, 00:10:32.247 "claimed": false, 00:10:32.247 "zoned": false, 00:10:32.247 "supported_io_types": { 00:10:32.247 "read": true, 00:10:32.247 "write": true, 00:10:32.247 "unmap": true, 00:10:32.247 "flush": true, 00:10:32.247 "reset": true, 00:10:32.247 "nvme_admin": false, 00:10:32.247 "nvme_io": false, 00:10:32.247 "nvme_io_md": false, 00:10:32.247 "write_zeroes": true, 00:10:32.247 "zcopy": false, 00:10:32.247 "get_zone_info": false, 00:10:32.247 "zone_management": false, 00:10:32.247 "zone_append": false, 00:10:32.247 "compare": false, 00:10:32.247 "compare_and_write": false, 00:10:32.248 "abort": false, 00:10:32.248 "seek_hole": false, 00:10:32.248 "seek_data": false, 00:10:32.248 "copy": false, 00:10:32.248 "nvme_iov_md": false 00:10:32.248 }, 00:10:32.248 "memory_domains": [ 00:10:32.248 { 00:10:32.248 
"dma_device_id": "system", 00:10:32.248 "dma_device_type": 1 00:10:32.248 }, 00:10:32.248 { 00:10:32.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.248 "dma_device_type": 2 00:10:32.248 }, 00:10:32.248 { 00:10:32.248 "dma_device_id": "system", 00:10:32.248 "dma_device_type": 1 00:10:32.248 }, 00:10:32.248 { 00:10:32.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.248 "dma_device_type": 2 00:10:32.248 } 00:10:32.248 ], 00:10:32.248 "driver_specific": { 00:10:32.248 "raid": { 00:10:32.248 "uuid": "950219e1-3888-4dd8-95e9-addac2125f0b", 00:10:32.248 "strip_size_kb": 64, 00:10:32.248 "state": "online", 00:10:32.248 "raid_level": "concat", 00:10:32.248 "superblock": true, 00:10:32.248 "num_base_bdevs": 2, 00:10:32.248 "num_base_bdevs_discovered": 2, 00:10:32.248 "num_base_bdevs_operational": 2, 00:10:32.248 "base_bdevs_list": [ 00:10:32.248 { 00:10:32.248 "name": "BaseBdev1", 00:10:32.248 "uuid": "361c0604-12e0-41cb-bfc5-f026ed62e603", 00:10:32.248 "is_configured": true, 00:10:32.248 "data_offset": 2048, 00:10:32.248 "data_size": 63488 00:10:32.248 }, 00:10:32.248 { 00:10:32.248 "name": "BaseBdev2", 00:10:32.248 "uuid": "297e03ba-80e9-4bd5-861b-89476c0e92d7", 00:10:32.248 "is_configured": true, 00:10:32.248 "data_offset": 2048, 00:10:32.248 "data_size": 63488 00:10:32.248 } 00:10:32.248 ] 00:10:32.248 } 00:10:32.248 } 00:10:32.248 }' 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:32.248 BaseBdev2' 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:32.248 18:08:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.248 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.248 [2024-12-06 18:08:57.742703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.248 [2024-12-06 18:08:57.742763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.248 [2024-12-06 18:08:57.742862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.512 "name": "Existed_Raid", 00:10:32.512 "uuid": "950219e1-3888-4dd8-95e9-addac2125f0b", 00:10:32.512 "strip_size_kb": 64, 00:10:32.512 "state": "offline", 00:10:32.512 "raid_level": "concat", 00:10:32.512 "superblock": true, 00:10:32.512 "num_base_bdevs": 2, 00:10:32.512 "num_base_bdevs_discovered": 1, 00:10:32.512 "num_base_bdevs_operational": 1, 00:10:32.512 "base_bdevs_list": [ 00:10:32.512 { 00:10:32.512 "name": null, 00:10:32.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.512 "is_configured": false, 00:10:32.512 "data_offset": 0, 00:10:32.512 "data_size": 63488 00:10:32.512 }, 00:10:32.512 { 00:10:32.512 "name": "BaseBdev2", 00:10:32.512 "uuid": "297e03ba-80e9-4bd5-861b-89476c0e92d7", 00:10:32.512 "is_configured": true, 00:10:32.512 "data_offset": 2048, 00:10:32.512 "data_size": 63488 00:10:32.512 } 00:10:32.512 ] 
00:10:32.512 }' 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.512 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.080 [2024-12-06 18:08:58.414148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:33.080 [2024-12-06 18:08:58.414354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.080 18:08:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.080 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.081 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:33.081 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:33.081 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:33.081 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61977 00:10:33.081 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61977 ']' 00:10:33.081 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61977 00:10:33.081 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:33.081 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.081 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61977 00:10:33.339 killing process with pid 61977 00:10:33.339 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.339 18:08:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.339 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61977' 00:10:33.339 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61977 00:10:33.339 [2024-12-06 18:08:58.599067] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.339 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61977 00:10:33.339 [2024-12-06 18:08:58.613910] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.275 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:34.275 00:10:34.275 real 0m5.459s 00:10:34.275 user 0m8.195s 00:10:34.275 sys 0m0.791s 00:10:34.275 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.275 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.275 ************************************ 00:10:34.275 END TEST raid_state_function_test_sb 00:10:34.275 ************************************ 00:10:34.275 18:08:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:34.275 18:08:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:34.275 18:08:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.275 18:08:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.275 ************************************ 00:10:34.275 START TEST raid_superblock_test 00:10:34.275 ************************************ 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62234 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62234 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62234 ']' 00:10:34.275 
18:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.275 18:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.534 [2024-12-06 18:08:59.828087] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:10:34.534 [2024-12-06 18:08:59.828269] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62234 ] 00:10:34.534 [2024-12-06 18:09:00.017492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.792 [2024-12-06 18:09:00.174466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.050 [2024-12-06 18:09:00.416554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.050 [2024-12-06 18:09:00.416616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.616 malloc1 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.616 [2024-12-06 18:09:00.901968] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:35.616 [2024-12-06 18:09:00.902041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.616 [2024-12-06 18:09:00.902075] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:35.616 [2024-12-06 18:09:00.902092] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:10:35.616 [2024-12-06 18:09:00.904885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.616 [2024-12-06 18:09:00.904933] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:35.616 pt1 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.616 malloc2 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.616 [2024-12-06 18:09:00.955281] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:35.616 [2024-12-06 18:09:00.955511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.616 [2024-12-06 18:09:00.955563] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:35.616 [2024-12-06 18:09:00.955580] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.616 [2024-12-06 18:09:00.958379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.616 [2024-12-06 18:09:00.958425] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:35.616 pt2 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.616 [2024-12-06 18:09:00.967364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:35.616 [2024-12-06 18:09:00.969925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:35.616 [2024-12-06 18:09:00.970275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:35.616 [2024-12-06 18:09:00.970419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:10:35.616 [2024-12-06 18:09:00.970812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:35.616 [2024-12-06 18:09:00.971146] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:35.616 [2024-12-06 18:09:00.971283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:35.616 [2024-12-06 18:09:00.971632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.616 18:09:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.616 18:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.616 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.616 "name": "raid_bdev1", 00:10:35.616 "uuid": "66a7c900-d6a2-4c08-bd71-466f72b31342", 00:10:35.616 "strip_size_kb": 64, 00:10:35.616 "state": "online", 00:10:35.616 "raid_level": "concat", 00:10:35.616 "superblock": true, 00:10:35.616 "num_base_bdevs": 2, 00:10:35.616 "num_base_bdevs_discovered": 2, 00:10:35.616 "num_base_bdevs_operational": 2, 00:10:35.616 "base_bdevs_list": [ 00:10:35.616 { 00:10:35.616 "name": "pt1", 00:10:35.616 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:35.616 "is_configured": true, 00:10:35.616 "data_offset": 2048, 00:10:35.616 "data_size": 63488 00:10:35.616 }, 00:10:35.616 { 00:10:35.616 "name": "pt2", 00:10:35.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.616 "is_configured": true, 00:10:35.616 "data_offset": 2048, 00:10:35.616 "data_size": 63488 00:10:35.616 } 00:10:35.616 ] 00:10:35.616 }' 00:10:35.616 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.616 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.183 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:36.183 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:36.183 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.183 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.183 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.183 
18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.183 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.183 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.183 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.183 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.183 [2024-12-06 18:09:01.472126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.183 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.183 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.183 "name": "raid_bdev1", 00:10:36.183 "aliases": [ 00:10:36.183 "66a7c900-d6a2-4c08-bd71-466f72b31342" 00:10:36.183 ], 00:10:36.183 "product_name": "Raid Volume", 00:10:36.183 "block_size": 512, 00:10:36.183 "num_blocks": 126976, 00:10:36.183 "uuid": "66a7c900-d6a2-4c08-bd71-466f72b31342", 00:10:36.183 "assigned_rate_limits": { 00:10:36.183 "rw_ios_per_sec": 0, 00:10:36.183 "rw_mbytes_per_sec": 0, 00:10:36.183 "r_mbytes_per_sec": 0, 00:10:36.183 "w_mbytes_per_sec": 0 00:10:36.183 }, 00:10:36.183 "claimed": false, 00:10:36.183 "zoned": false, 00:10:36.183 "supported_io_types": { 00:10:36.183 "read": true, 00:10:36.183 "write": true, 00:10:36.183 "unmap": true, 00:10:36.183 "flush": true, 00:10:36.183 "reset": true, 00:10:36.183 "nvme_admin": false, 00:10:36.183 "nvme_io": false, 00:10:36.183 "nvme_io_md": false, 00:10:36.183 "write_zeroes": true, 00:10:36.183 "zcopy": false, 00:10:36.183 "get_zone_info": false, 00:10:36.183 "zone_management": false, 00:10:36.183 "zone_append": false, 00:10:36.183 "compare": false, 00:10:36.183 "compare_and_write": false, 00:10:36.183 "abort": false, 00:10:36.183 "seek_hole": false, 00:10:36.183 
"seek_data": false, 00:10:36.183 "copy": false, 00:10:36.183 "nvme_iov_md": false 00:10:36.183 }, 00:10:36.183 "memory_domains": [ 00:10:36.183 { 00:10:36.183 "dma_device_id": "system", 00:10:36.183 "dma_device_type": 1 00:10:36.183 }, 00:10:36.183 { 00:10:36.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.183 "dma_device_type": 2 00:10:36.183 }, 00:10:36.183 { 00:10:36.183 "dma_device_id": "system", 00:10:36.183 "dma_device_type": 1 00:10:36.183 }, 00:10:36.183 { 00:10:36.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.183 "dma_device_type": 2 00:10:36.183 } 00:10:36.183 ], 00:10:36.183 "driver_specific": { 00:10:36.183 "raid": { 00:10:36.183 "uuid": "66a7c900-d6a2-4c08-bd71-466f72b31342", 00:10:36.183 "strip_size_kb": 64, 00:10:36.183 "state": "online", 00:10:36.183 "raid_level": "concat", 00:10:36.183 "superblock": true, 00:10:36.183 "num_base_bdevs": 2, 00:10:36.183 "num_base_bdevs_discovered": 2, 00:10:36.183 "num_base_bdevs_operational": 2, 00:10:36.183 "base_bdevs_list": [ 00:10:36.184 { 00:10:36.184 "name": "pt1", 00:10:36.184 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.184 "is_configured": true, 00:10:36.184 "data_offset": 2048, 00:10:36.184 "data_size": 63488 00:10:36.184 }, 00:10:36.184 { 00:10:36.184 "name": "pt2", 00:10:36.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.184 "is_configured": true, 00:10:36.184 "data_offset": 2048, 00:10:36.184 "data_size": 63488 00:10:36.184 } 00:10:36.184 ] 00:10:36.184 } 00:10:36.184 } 00:10:36.184 }' 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:36.184 pt2' 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.184 18:09:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.184 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.443 [2024-12-06 18:09:01.748171] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=66a7c900-d6a2-4c08-bd71-466f72b31342 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 66a7c900-d6a2-4c08-bd71-466f72b31342 ']' 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.443 [2024-12-06 18:09:01.795858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.443 [2024-12-06 18:09:01.796013] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.443 [2024-12-06 18:09:01.796224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.443 [2024-12-06 18:09:01.796389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.443 [2024-12-06 18:09:01.796542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.443 [2024-12-06 18:09:01.935973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:36.443 [2024-12-06 18:09:01.938608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:36.443 [2024-12-06 18:09:01.938838] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:36.443 [2024-12-06 18:09:01.939048] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:36.443 [2024-12-06 18:09:01.939335] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.443 [2024-12-06 18:09:01.939453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:36.443 request: 00:10:36.443 { 00:10:36.443 "name": "raid_bdev1", 00:10:36.443 "raid_level": "concat", 00:10:36.443 "base_bdevs": [ 00:10:36.443 "malloc1", 00:10:36.443 "malloc2" 00:10:36.443 ], 00:10:36.443 "strip_size_kb": 64, 00:10:36.443 "superblock": false, 00:10:36.443 "method": "bdev_raid_create", 00:10:36.443 "req_id": 1 00:10:36.443 } 00:10:36.443 Got JSON-RPC error response 00:10:36.443 response: 00:10:36.443 { 00:10:36.443 "code": -17, 00:10:36.443 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:36.443 } 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:36.443 18:09:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.702 [2024-12-06 18:09:02.012086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:36.702 [2024-12-06 18:09:02.012278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.702 [2024-12-06 18:09:02.012315] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:36.702 [2024-12-06 18:09:02.012334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.702 [2024-12-06 18:09:02.015225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.702 [2024-12-06 18:09:02.015275] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:36.702 [2024-12-06 18:09:02.015374] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:36.702 [2024-12-06 18:09:02.015458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:36.702 pt1 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.702 "name": "raid_bdev1", 00:10:36.702 "uuid": "66a7c900-d6a2-4c08-bd71-466f72b31342", 00:10:36.702 "strip_size_kb": 64, 00:10:36.702 "state": "configuring", 00:10:36.702 "raid_level": "concat", 00:10:36.702 "superblock": true, 00:10:36.702 "num_base_bdevs": 2, 00:10:36.702 "num_base_bdevs_discovered": 1, 00:10:36.702 "num_base_bdevs_operational": 2, 00:10:36.702 "base_bdevs_list": [ 00:10:36.702 { 00:10:36.702 
"name": "pt1", 00:10:36.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.702 "is_configured": true, 00:10:36.702 "data_offset": 2048, 00:10:36.702 "data_size": 63488 00:10:36.702 }, 00:10:36.702 { 00:10:36.702 "name": null, 00:10:36.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.702 "is_configured": false, 00:10:36.702 "data_offset": 2048, 00:10:36.702 "data_size": 63488 00:10:36.702 } 00:10:36.702 ] 00:10:36.702 }' 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.702 18:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.267 [2024-12-06 18:09:02.576326] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:37.267 [2024-12-06 18:09:02.576416] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.267 [2024-12-06 18:09:02.576465] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:37.267 [2024-12-06 18:09:02.576484] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.267 [2024-12-06 18:09:02.577081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.267 [2024-12-06 18:09:02.577132] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:37.267 [2024-12-06 18:09:02.577235] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:37.267 [2024-12-06 18:09:02.577284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:37.267 [2024-12-06 18:09:02.577431] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:37.267 [2024-12-06 18:09:02.577453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:37.267 [2024-12-06 18:09:02.577752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:37.267 [2024-12-06 18:09:02.577956] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:37.267 [2024-12-06 18:09:02.577977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:37.267 [2024-12-06 18:09:02.578146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.267 pt2 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.267 
18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.267 "name": "raid_bdev1", 00:10:37.267 "uuid": "66a7c900-d6a2-4c08-bd71-466f72b31342", 00:10:37.267 "strip_size_kb": 64, 00:10:37.267 "state": "online", 00:10:37.267 "raid_level": "concat", 00:10:37.267 "superblock": true, 00:10:37.267 "num_base_bdevs": 2, 00:10:37.267 "num_base_bdevs_discovered": 2, 00:10:37.267 "num_base_bdevs_operational": 2, 00:10:37.267 "base_bdevs_list": [ 00:10:37.267 { 00:10:37.267 "name": "pt1", 00:10:37.267 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.267 "is_configured": true, 00:10:37.267 "data_offset": 2048, 00:10:37.267 "data_size": 63488 00:10:37.267 }, 00:10:37.267 { 00:10:37.267 "name": "pt2", 00:10:37.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.267 "is_configured": true, 00:10:37.267 "data_offset": 2048, 00:10:37.267 "data_size": 63488 
00:10:37.267 } 00:10:37.267 ] 00:10:37.267 }' 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.267 18:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.835 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:37.835 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:37.835 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.835 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.835 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.835 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.835 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:37.835 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.835 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.835 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.835 [2024-12-06 18:09:03.104761] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.835 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.835 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:37.835 "name": "raid_bdev1", 00:10:37.835 "aliases": [ 00:10:37.835 "66a7c900-d6a2-4c08-bd71-466f72b31342" 00:10:37.835 ], 00:10:37.835 "product_name": "Raid Volume", 00:10:37.835 "block_size": 512, 00:10:37.835 "num_blocks": 126976, 00:10:37.835 "uuid": "66a7c900-d6a2-4c08-bd71-466f72b31342", 00:10:37.835 "assigned_rate_limits": { 00:10:37.835 
"rw_ios_per_sec": 0, 00:10:37.835 "rw_mbytes_per_sec": 0, 00:10:37.835 "r_mbytes_per_sec": 0, 00:10:37.835 "w_mbytes_per_sec": 0 00:10:37.835 }, 00:10:37.835 "claimed": false, 00:10:37.835 "zoned": false, 00:10:37.835 "supported_io_types": { 00:10:37.835 "read": true, 00:10:37.835 "write": true, 00:10:37.835 "unmap": true, 00:10:37.835 "flush": true, 00:10:37.835 "reset": true, 00:10:37.835 "nvme_admin": false, 00:10:37.835 "nvme_io": false, 00:10:37.835 "nvme_io_md": false, 00:10:37.835 "write_zeroes": true, 00:10:37.835 "zcopy": false, 00:10:37.835 "get_zone_info": false, 00:10:37.835 "zone_management": false, 00:10:37.835 "zone_append": false, 00:10:37.835 "compare": false, 00:10:37.835 "compare_and_write": false, 00:10:37.835 "abort": false, 00:10:37.835 "seek_hole": false, 00:10:37.835 "seek_data": false, 00:10:37.835 "copy": false, 00:10:37.835 "nvme_iov_md": false 00:10:37.835 }, 00:10:37.835 "memory_domains": [ 00:10:37.835 { 00:10:37.835 "dma_device_id": "system", 00:10:37.835 "dma_device_type": 1 00:10:37.835 }, 00:10:37.835 { 00:10:37.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.835 "dma_device_type": 2 00:10:37.835 }, 00:10:37.835 { 00:10:37.835 "dma_device_id": "system", 00:10:37.835 "dma_device_type": 1 00:10:37.835 }, 00:10:37.835 { 00:10:37.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.835 "dma_device_type": 2 00:10:37.835 } 00:10:37.835 ], 00:10:37.835 "driver_specific": { 00:10:37.835 "raid": { 00:10:37.835 "uuid": "66a7c900-d6a2-4c08-bd71-466f72b31342", 00:10:37.835 "strip_size_kb": 64, 00:10:37.835 "state": "online", 00:10:37.835 "raid_level": "concat", 00:10:37.835 "superblock": true, 00:10:37.835 "num_base_bdevs": 2, 00:10:37.835 "num_base_bdevs_discovered": 2, 00:10:37.835 "num_base_bdevs_operational": 2, 00:10:37.835 "base_bdevs_list": [ 00:10:37.835 { 00:10:37.835 "name": "pt1", 00:10:37.835 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.835 "is_configured": true, 00:10:37.835 "data_offset": 2048, 00:10:37.835 
"data_size": 63488 00:10:37.835 }, 00:10:37.835 { 00:10:37.835 "name": "pt2", 00:10:37.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.835 "is_configured": true, 00:10:37.835 "data_offset": 2048, 00:10:37.835 "data_size": 63488 00:10:37.835 } 00:10:37.835 ] 00:10:37.835 } 00:10:37.835 } 00:10:37.835 }' 00:10:37.835 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:37.835 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:37.835 pt2' 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.836 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.102 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.102 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.102 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.102 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:38.102 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.102 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.102 [2024-12-06 18:09:03.360863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.102 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.102 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 66a7c900-d6a2-4c08-bd71-466f72b31342 '!=' 66a7c900-d6a2-4c08-bd71-466f72b31342 ']' 00:10:38.102 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:38.102 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.102 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:38.102 18:09:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62234 00:10:38.103 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62234 
']' 00:10:38.103 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62234 00:10:38.103 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:38.103 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.103 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62234 00:10:38.103 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.103 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.103 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62234' 00:10:38.103 killing process with pid 62234 00:10:38.103 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62234 00:10:38.103 [2024-12-06 18:09:03.450468] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:38.103 18:09:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62234 00:10:38.103 [2024-12-06 18:09:03.450587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.103 [2024-12-06 18:09:03.450657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.103 [2024-12-06 18:09:03.450678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:38.360 [2024-12-06 18:09:03.634260] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:39.291 18:09:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:39.291 00:10:39.291 real 0m4.983s 00:10:39.291 user 0m7.385s 00:10:39.291 sys 0m0.701s 00:10:39.292 18:09:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.292 18:09:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.292 ************************************ 00:10:39.292 END TEST raid_superblock_test 00:10:39.292 ************************************ 00:10:39.292 18:09:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:10:39.292 18:09:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:39.292 18:09:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.292 18:09:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:39.292 ************************************ 00:10:39.292 START TEST raid_read_error_test 00:10:39.292 ************************************ 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.292 
18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Wox7TQNuak 00:10:39.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62446 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62446 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62446 ']' 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.292 18:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.550 [2024-12-06 18:09:04.865669] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:10:39.550 [2024-12-06 18:09:04.866131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62446 ] 00:10:39.550 [2024-12-06 18:09:05.048784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.807 [2024-12-06 18:09:05.177998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.064 [2024-12-06 18:09:05.381746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.064 [2024-12-06 18:09:05.382019] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.629 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.629 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:40.629 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.629 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:40.629 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.629 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.629 BaseBdev1_malloc 00:10:40.629 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.629 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:40.629 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.629 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.629 true 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.630 [2024-12-06 18:09:05.923867] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:40.630 [2024-12-06 18:09:05.924148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.630 [2024-12-06 18:09:05.924194] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:40.630 [2024-12-06 18:09:05.924215] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.630 [2024-12-06 18:09:05.927321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.630 [2024-12-06 18:09:05.927500] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:40.630 BaseBdev1 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.630 BaseBdev2_malloc 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.630 true 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.630 [2024-12-06 18:09:05.985019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:40.630 [2024-12-06 18:09:05.985091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.630 [2024-12-06 18:09:05.985118] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:40.630 [2024-12-06 18:09:05.985136] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.630 [2024-12-06 18:09:05.987956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.630 [2024-12-06 18:09:05.988006] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:40.630 BaseBdev2 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.630 [2024-12-06 18:09:05.993113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:10:40.630 [2024-12-06 18:09:05.995575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.630 [2024-12-06 18:09:05.996011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:40.630 [2024-12-06 18:09:05.996043] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:40.630 [2024-12-06 18:09:05.996378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:40.630 [2024-12-06 18:09:05.996608] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:40.630 [2024-12-06 18:09:05.996631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:40.630 [2024-12-06 18:09:05.996874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:40.630 18:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.630 18:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.630 18:09:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.630 18:09:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.630 18:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.630 18:09:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.630 18:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.630 "name": "raid_bdev1", 00:10:40.630 "uuid": "6ceafc82-81d3-4c4e-971d-0eec69de991c", 00:10:40.630 "strip_size_kb": 64, 00:10:40.630 "state": "online", 00:10:40.630 "raid_level": "concat", 00:10:40.630 "superblock": true, 00:10:40.630 "num_base_bdevs": 2, 00:10:40.630 "num_base_bdevs_discovered": 2, 00:10:40.630 "num_base_bdevs_operational": 2, 00:10:40.630 "base_bdevs_list": [ 00:10:40.630 { 00:10:40.630 "name": "BaseBdev1", 00:10:40.630 "uuid": "3d9652b0-806f-5476-a087-b79e9c060bd4", 00:10:40.630 "is_configured": true, 00:10:40.630 "data_offset": 2048, 00:10:40.630 "data_size": 63488 00:10:40.630 }, 00:10:40.630 { 00:10:40.630 "name": "BaseBdev2", 00:10:40.630 "uuid": "a051fd17-3e6d-5ba8-9a1d-2ee99ab07565", 00:10:40.630 "is_configured": true, 00:10:40.630 "data_offset": 2048, 00:10:40.630 "data_size": 63488 00:10:40.630 } 00:10:40.630 ] 00:10:40.630 }' 00:10:40.630 18:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.630 18:09:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.197 18:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:41.197 18:09:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:41.197 [2024-12-06 18:09:06.675138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.131 "name": "raid_bdev1", 00:10:42.131 "uuid": "6ceafc82-81d3-4c4e-971d-0eec69de991c", 00:10:42.131 "strip_size_kb": 64, 00:10:42.131 "state": "online", 00:10:42.131 "raid_level": "concat", 00:10:42.131 "superblock": true, 00:10:42.131 "num_base_bdevs": 2, 00:10:42.131 "num_base_bdevs_discovered": 2, 00:10:42.131 "num_base_bdevs_operational": 2, 00:10:42.131 "base_bdevs_list": [ 00:10:42.131 { 00:10:42.131 "name": "BaseBdev1", 00:10:42.131 "uuid": "3d9652b0-806f-5476-a087-b79e9c060bd4", 00:10:42.131 "is_configured": true, 00:10:42.131 "data_offset": 2048, 00:10:42.131 "data_size": 63488 00:10:42.131 }, 00:10:42.131 { 00:10:42.131 "name": "BaseBdev2", 00:10:42.131 "uuid": "a051fd17-3e6d-5ba8-9a1d-2ee99ab07565", 00:10:42.131 "is_configured": true, 00:10:42.131 "data_offset": 2048, 00:10:42.131 "data_size": 63488 00:10:42.131 } 00:10:42.131 ] 00:10:42.131 }' 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.131 18:09:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.733 18:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:42.733 18:09:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.733 18:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.733 [2024-12-06 18:09:08.102943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.733 [2024-12-06 18:09:08.102986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.733 [2024-12-06 18:09:08.106378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.733 [2024-12-06 18:09:08.106587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.733 [2024-12-06 18:09:08.106646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.733 [2024-12-06 18:09:08.106671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:42.733 { 00:10:42.733 "results": [ 00:10:42.733 { 00:10:42.733 "job": "raid_bdev1", 00:10:42.733 "core_mask": "0x1", 00:10:42.733 "workload": "randrw", 00:10:42.733 "percentage": 50, 00:10:42.733 "status": "finished", 00:10:42.733 "queue_depth": 1, 00:10:42.733 "io_size": 131072, 00:10:42.733 "runtime": 1.425399, 00:10:42.733 "iops": 10491.097580396787, 00:10:42.733 "mibps": 1311.3871975495983, 00:10:42.733 "io_failed": 1, 00:10:42.733 "io_timeout": 0, 00:10:42.733 "avg_latency_us": 132.4097811008784, 00:10:42.733 "min_latency_us": 40.96, 00:10:42.733 "max_latency_us": 1846.9236363636364 00:10:42.733 } 00:10:42.733 ], 00:10:42.733 "core_count": 1 00:10:42.733 } 00:10:42.733 18:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.733 18:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62446 00:10:42.733 18:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62446 ']' 00:10:42.733 18:09:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62446 00:10:42.733 18:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:42.733 18:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.733 18:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62446 00:10:42.733 killing process with pid 62446 00:10:42.733 18:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.733 18:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.733 18:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62446' 00:10:42.733 18:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62446 00:10:42.733 [2024-12-06 18:09:08.142256] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.733 18:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62446 00:10:42.992 [2024-12-06 18:09:08.264093] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.927 18:09:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Wox7TQNuak 00:10:43.927 18:09:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:43.927 18:09:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:43.927 18:09:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:43.927 18:09:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:43.927 18:09:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:43.927 18:09:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:43.927 18:09:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:43.927 00:10:43.927 real 0m4.629s 00:10:43.927 user 0m5.825s 00:10:43.927 sys 0m0.567s 00:10:43.927 18:09:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.927 18:09:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 ************************************ 00:10:43.927 END TEST raid_read_error_test 00:10:43.927 ************************************ 00:10:43.927 18:09:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:43.927 18:09:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:43.927 18:09:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.927 18:09:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 ************************************ 00:10:43.927 START TEST raid_write_error_test 00:10:43.927 ************************************ 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.927 18:09:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:43.927 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1A8dfIkXhU 00:10:44.186 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62597 00:10:44.187 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62597 00:10:44.187 18:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62597 ']' 00:10:44.187 18:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:10:44.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.187 18:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.187 18:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.187 18:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:44.187 18:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.187 18:09:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.187 [2024-12-06 18:09:09.557237] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:10:44.187 [2024-12-06 18:09:09.557430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62597 ] 00:10:44.445 [2024-12-06 18:09:09.741401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.445 [2024-12-06 18:09:09.871921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.704 [2024-12-06 18:09:10.079149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.704 [2024-12-06 18:09:10.079244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
for bdev in "${base_bdevs[@]}" 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.273 BaseBdev1_malloc 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.273 true 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.273 [2024-12-06 18:09:10.567363] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:45.273 [2024-12-06 18:09:10.567582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.273 [2024-12-06 18:09:10.567659] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:45.273 [2024-12-06 18:09:10.567815] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.273 [2024-12-06 18:09:10.570804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.273 [2024-12-06 18:09:10.570979] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:45.273 BaseBdev1 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.273 BaseBdev2_malloc 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.273 true 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.273 [2024-12-06 18:09:10.636442] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:45.273 [2024-12-06 18:09:10.636511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.273 [2024-12-06 18:09:10.636538] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:45.273 
[2024-12-06 18:09:10.636555] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.273 [2024-12-06 18:09:10.639470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.273 [2024-12-06 18:09:10.639695] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:45.273 BaseBdev2 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.273 [2024-12-06 18:09:10.644589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.273 [2024-12-06 18:09:10.647283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.273 [2024-12-06 18:09:10.647704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:45.273 [2024-12-06 18:09:10.647863] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:45.273 [2024-12-06 18:09:10.648208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:45.273 [2024-12-06 18:09:10.648579] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:45.273 [2024-12-06 18:09:10.648734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:45.273 [2024-12-06 18:09:10.649144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.273 
18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.273 "name": "raid_bdev1", 00:10:45.273 "uuid": "9ccb509e-6730-4ea6-847e-c3732a747c7f", 00:10:45.273 "strip_size_kb": 64, 00:10:45.273 "state": "online", 00:10:45.273 "raid_level": "concat", 00:10:45.273 "superblock": true, 
00:10:45.273 "num_base_bdevs": 2, 00:10:45.273 "num_base_bdevs_discovered": 2, 00:10:45.273 "num_base_bdevs_operational": 2, 00:10:45.273 "base_bdevs_list": [ 00:10:45.273 { 00:10:45.273 "name": "BaseBdev1", 00:10:45.273 "uuid": "8f3db1bd-ae54-5b8b-9436-8dc73ea78da1", 00:10:45.273 "is_configured": true, 00:10:45.273 "data_offset": 2048, 00:10:45.273 "data_size": 63488 00:10:45.273 }, 00:10:45.273 { 00:10:45.273 "name": "BaseBdev2", 00:10:45.273 "uuid": "16646ef9-2a23-57ee-9896-19d9ff8615e8", 00:10:45.273 "is_configured": true, 00:10:45.273 "data_offset": 2048, 00:10:45.273 "data_size": 63488 00:10:45.273 } 00:10:45.273 ] 00:10:45.273 }' 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.273 18:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.842 18:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:45.842 18:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:45.843 [2024-12-06 18:09:11.314667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.791 "name": "raid_bdev1", 00:10:46.791 "uuid": "9ccb509e-6730-4ea6-847e-c3732a747c7f", 00:10:46.791 "strip_size_kb": 64, 00:10:46.791 "state": "online", 00:10:46.791 "raid_level": "concat", 
00:10:46.791 "superblock": true, 00:10:46.791 "num_base_bdevs": 2, 00:10:46.791 "num_base_bdevs_discovered": 2, 00:10:46.791 "num_base_bdevs_operational": 2, 00:10:46.791 "base_bdevs_list": [ 00:10:46.791 { 00:10:46.791 "name": "BaseBdev1", 00:10:46.791 "uuid": "8f3db1bd-ae54-5b8b-9436-8dc73ea78da1", 00:10:46.791 "is_configured": true, 00:10:46.791 "data_offset": 2048, 00:10:46.791 "data_size": 63488 00:10:46.791 }, 00:10:46.791 { 00:10:46.791 "name": "BaseBdev2", 00:10:46.791 "uuid": "16646ef9-2a23-57ee-9896-19d9ff8615e8", 00:10:46.791 "is_configured": true, 00:10:46.791 "data_offset": 2048, 00:10:46.791 "data_size": 63488 00:10:46.791 } 00:10:46.791 ] 00:10:46.791 }' 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.791 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.370 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:47.370 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.370 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.370 [2024-12-06 18:09:12.790269] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.370 [2024-12-06 18:09:12.790311] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.370 [2024-12-06 18:09:12.794086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.370 [2024-12-06 18:09:12.794338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.370 [2024-12-06 18:09:12.794426] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.370 [2024-12-06 18:09:12.794626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:47.370 { 
00:10:47.370 "results": [
00:10:47.370 {
00:10:47.370 "job": "raid_bdev1",
00:10:47.370 "core_mask": "0x1",
00:10:47.370 "workload": "randrw",
00:10:47.370 "percentage": 50,
00:10:47.370 "status": "finished",
00:10:47.370 "queue_depth": 1,
00:10:47.371 "io_size": 131072,
00:10:47.371 "runtime": 1.473326,
00:10:47.371 "iops": 10360.911298653522,
00:10:47.371 "mibps": 1295.1139123316902,
00:10:47.371 "io_failed": 1,
00:10:47.371 "io_timeout": 0,
00:10:47.371 "avg_latency_us": 134.54607577147075,
00:10:47.371 "min_latency_us": 40.02909090909091,
00:10:47.371 "max_latency_us": 2055.447272727273
00:10:47.371 }
00:10:47.371 ],
00:10:47.371 "core_count": 1
00:10:47.371 }
00:10:47.371 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.371 18:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62597
00:10:47.371 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62597 ']'
00:10:47.371 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62597
00:10:47.371 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:10:47.371 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:47.371 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62597
killing process with pid 62597
00:10:47.371 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:47.371 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:47.371 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62597'
00:10:47.371 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62597
00:10:47.371 [2024-12-06 18:09:12.831065] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:47.371 18:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62597
00:10:47.630 [2024-12-06 18:09:12.954018] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:48.569 18:09:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1A8dfIkXhU
00:10:48.569 18:09:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:10:48.569 18:09:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:10:48.569 18:09:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68
00:10:48.569 18:09:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:10:48.569 18:09:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:48.569 18:09:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:10:48.569 18:09:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]]
00:10:48.569
00:10:48.569 real 0m4.652s
00:10:48.569 user 0m5.871s
00:10:48.569 sys 0m0.562s
00:10:48.569 18:09:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:48.569 ************************************
00:10:48.569 END TEST raid_write_error_test
00:10:48.569 ************************************
00:10:48.569 18:09:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.828 18:09:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:10:48.828 18:09:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:10:48.828 18:09:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:48.828 18:09:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:48.828 18:09:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:48.828 ************************************
00:10:48.828 START TEST raid_state_function_test
00:10:48.828 ************************************
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
Process raid pid: 62735
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62735
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62735'
00:10:48.828 18:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62735
00:10:48.829 18:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62735 ']'
00:10:48.829 18:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:48.829 18:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:48.829 18:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:48.829 18:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:48.829 18:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.829 [2024-12-06 18:09:14.238705] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization...
00:10:48.829 [2024-12-06 18:09:14.238902] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:49.087 [2024-12-06 18:09:14.415654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:49.087 [2024-12-06 18:09:14.556553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:49.347 [2024-12-06 18:09:14.764929] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:49.347 [2024-12-06 18:09:14.764983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:49.915 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:49.915 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:10:49.915 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:10:49.915 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.915 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.915 [2024-12-06 18:09:15.279941] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:49.915 [2024-12-06 18:09:15.280010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:49.915 [2024-12-06 18:09:15.280027] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:49.915 [2024-12-06 18:09:15.280044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:49.915 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.915 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:10:49.915 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:49.915 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:49.915 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:49.915 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:49.915 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:49.915 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:49.916 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:49.916 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:49.916 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:49.916 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:49.916 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:49.916 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.916 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.916 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.916 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:49.916 "name": "Existed_Raid",
00:10:49.916 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.916 "strip_size_kb": 0,
00:10:49.916 "state": "configuring",
00:10:49.916 "raid_level": "raid1",
00:10:49.916 "superblock": false,
00:10:49.916 "num_base_bdevs": 2,
00:10:49.916 "num_base_bdevs_discovered": 0,
00:10:49.916 "num_base_bdevs_operational": 2,
00:10:49.916 "base_bdevs_list": [
00:10:49.916 {
00:10:49.916 "name": "BaseBdev1",
00:10:49.916 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.916 "is_configured": false,
00:10:49.916 "data_offset": 0,
00:10:49.916 "data_size": 0
00:10:49.916 },
00:10:49.916 {
00:10:49.916 "name": "BaseBdev2",
00:10:49.916 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.916 "is_configured": false,
00:10:49.916 "data_offset": 0,
00:10:49.916 "data_size": 0
00:10:49.916 }
00:10:49.916 ]
00:10:49.916 }'
00:10:49.916 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:49.916 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.482 [2024-12-06 18:09:15.796073] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:50.482 [2024-12-06 18:09:15.797421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.482 [2024-12-06 18:09:15.804046] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:50.482 [2024-12-06 18:09:15.804219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:50.482 [2024-12-06 18:09:15.804245] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:50.482 [2024-12-06 18:09:15.804266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.482 [2024-12-06 18:09:15.848988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.482 [
00:10:50.482 {
00:10:50.482 "name": "BaseBdev1",
00:10:50.482 "aliases": [
00:10:50.482 "8d7359e6-40a5-45ac-90f6-4531ca189062"
00:10:50.482 ],
00:10:50.482 "product_name": "Malloc disk",
00:10:50.482 "block_size": 512,
00:10:50.482 "num_blocks": 65536,
00:10:50.482 "uuid": "8d7359e6-40a5-45ac-90f6-4531ca189062",
00:10:50.482 "assigned_rate_limits": {
00:10:50.482 "rw_ios_per_sec": 0,
00:10:50.482 "rw_mbytes_per_sec": 0,
00:10:50.482 "r_mbytes_per_sec": 0,
00:10:50.482 "w_mbytes_per_sec": 0
00:10:50.482 },
00:10:50.482 "claimed": true,
00:10:50.482 "claim_type": "exclusive_write",
00:10:50.482 "zoned": false,
00:10:50.482 "supported_io_types": {
00:10:50.482 "read": true,
00:10:50.482 "write": true,
00:10:50.482 "unmap": true,
00:10:50.482 "flush": true,
00:10:50.482 "reset": true,
00:10:50.482 "nvme_admin": false,
00:10:50.482 "nvme_io": false,
00:10:50.482 "nvme_io_md": false,
00:10:50.482 "write_zeroes": true,
00:10:50.482 "zcopy": true,
00:10:50.482 "get_zone_info": false,
00:10:50.482 "zone_management": false,
00:10:50.482 "zone_append": false,
00:10:50.482 "compare": false,
00:10:50.482 "compare_and_write": false,
00:10:50.482 "abort": true,
00:10:50.482 "seek_hole": false,
00:10:50.482 "seek_data": false,
00:10:50.482 "copy": true,
00:10:50.482 "nvme_iov_md": false
00:10:50.482 },
00:10:50.482 "memory_domains": [
00:10:50.482 {
00:10:50.482 "dma_device_id": "system",
00:10:50.482 "dma_device_type": 1
00:10:50.482 },
00:10:50.482 {
00:10:50.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:50.482 "dma_device_type": 2
00:10:50.482 }
00:10:50.482 ],
00:10:50.482 "driver_specific": {}
00:10:50.482 }
00:10:50.482 ]
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.482 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:50.482 "name": "Existed_Raid",
00:10:50.482 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:50.482 "strip_size_kb": 0,
00:10:50.482 "state": "configuring",
00:10:50.483 "raid_level": "raid1",
00:10:50.483 "superblock": false,
00:10:50.483 "num_base_bdevs": 2,
00:10:50.483 "num_base_bdevs_discovered": 1,
00:10:50.483 "num_base_bdevs_operational": 2,
00:10:50.483 "base_bdevs_list": [
00:10:50.483 {
00:10:50.483 "name": "BaseBdev1",
00:10:50.483 "uuid": "8d7359e6-40a5-45ac-90f6-4531ca189062",
00:10:50.483 "is_configured": true,
00:10:50.483 "data_offset": 0,
00:10:50.483 "data_size": 65536
00:10:50.483 },
00:10:50.483 {
00:10:50.483 "name": "BaseBdev2",
00:10:50.483 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:50.483 "is_configured": false,
00:10:50.483 "data_offset": 0,
00:10:50.483 "data_size": 0
00:10:50.483 }
00:10:50.483 ]
00:10:50.483 }'
00:10:50.483 18:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:50.483 18:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.055 [2024-12-06 18:09:16.417546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:51.055 [2024-12-06 18:09:16.417604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.055 [2024-12-06 18:09:16.425559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:51.055 [2024-12-06 18:09:16.428062] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:51.055 [2024-12-06 18:09:16.428244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:51.055 "name": "Existed_Raid",
00:10:51.055 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:51.055 "strip_size_kb": 0,
00:10:51.055 "state": "configuring",
00:10:51.055 "raid_level": "raid1",
00:10:51.055 "superblock": false,
00:10:51.055 "num_base_bdevs": 2,
00:10:51.055 "num_base_bdevs_discovered": 1,
00:10:51.055 "num_base_bdevs_operational": 2,
00:10:51.055 "base_bdevs_list": [
00:10:51.055 {
00:10:51.055 "name": "BaseBdev1",
00:10:51.055 "uuid": "8d7359e6-40a5-45ac-90f6-4531ca189062",
00:10:51.055 "is_configured": true,
00:10:51.055 "data_offset": 0,
00:10:51.055 "data_size": 65536
00:10:51.055 },
00:10:51.055 {
00:10:51.055 "name": "BaseBdev2",
00:10:51.055 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:51.055 "is_configured": false,
00:10:51.055 "data_offset": 0,
00:10:51.055 "data_size": 0
00:10:51.055 }
00:10:51.055 ]
00:10:51.055 }'
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:51.055 18:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.632 18:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:51.632 18:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.632 18:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.632 [2024-12-06 18:09:17.011510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:51.632 [2024-12-06 18:09:17.011757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:51.632 [2024-12-06 18:09:17.011810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:10:51.632 [2024-12-06 18:09:17.012146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:10:51.632 [2024-12-06 18:09:17.012370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:51.632 [2024-12-06 18:09:17.012392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:10:51.632 [2024-12-06 18:09:17.012706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
BaseBdev2
00:10:51.632 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.632 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.633 [
00:10:51.633 {
00:10:51.633 "name": "BaseBdev2",
00:10:51.633 "aliases": [
00:10:51.633 "a364d1d7-e383-4e97-b096-6370ca450636"
00:10:51.633 ],
00:10:51.633 "product_name": "Malloc disk",
00:10:51.633 "block_size": 512,
00:10:51.633 "num_blocks": 65536,
00:10:51.633 "uuid": "a364d1d7-e383-4e97-b096-6370ca450636",
00:10:51.633 "assigned_rate_limits": {
00:10:51.633 "rw_ios_per_sec": 0,
00:10:51.633 "rw_mbytes_per_sec": 0,
00:10:51.633 "r_mbytes_per_sec": 0,
00:10:51.633 "w_mbytes_per_sec": 0
00:10:51.633 },
00:10:51.633 "claimed": true,
00:10:51.633 "claim_type": "exclusive_write",
00:10:51.633 "zoned": false,
00:10:51.633 "supported_io_types": {
00:10:51.633 "read": true,
00:10:51.633 "write": true,
00:10:51.633 "unmap": true,
00:10:51.633 "flush": true,
00:10:51.633 "reset": true,
00:10:51.633 "nvme_admin": false,
00:10:51.633 "nvme_io": false,
00:10:51.633 "nvme_io_md": false,
00:10:51.633 "write_zeroes": true,
00:10:51.633 "zcopy": true,
00:10:51.633 "get_zone_info": false,
00:10:51.633 "zone_management": false,
00:10:51.633 "zone_append": false,
00:10:51.633 "compare": false,
00:10:51.633 "compare_and_write": false,
00:10:51.633 "abort": true,
00:10:51.633 "seek_hole": false,
00:10:51.633 "seek_data": false,
00:10:51.633 "copy": true,
00:10:51.633 "nvme_iov_md": false
00:10:51.633 },
00:10:51.633 "memory_domains": [
00:10:51.633 {
00:10:51.633 "dma_device_id": "system",
00:10:51.633 "dma_device_type": 1
00:10:51.633 },
00:10:51.633 {
00:10:51.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:51.633 "dma_device_type": 2
00:10:51.633 }
00:10:51.633 ],
00:10:51.633 "driver_specific": {}
00:10:51.633 }
00:10:51.633 ]
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:51.633 "name": "Existed_Raid",
00:10:51.633 "uuid": "4aef962f-4655-4efa-8cc0-4764d10771fd",
00:10:51.633 "strip_size_kb": 0,
00:10:51.633 "state": "online",
00:10:51.633 "raid_level": "raid1",
00:10:51.633 "superblock": false,
00:10:51.633 "num_base_bdevs": 2,
00:10:51.633 "num_base_bdevs_discovered": 2,
00:10:51.633 "num_base_bdevs_operational": 2,
00:10:51.633 "base_bdevs_list": [
00:10:51.633 {
00:10:51.633 "name": "BaseBdev1",
00:10:51.633 "uuid": "8d7359e6-40a5-45ac-90f6-4531ca189062",
00:10:51.633 "is_configured": true,
00:10:51.633 "data_offset": 0,
00:10:51.633 "data_size": 65536
00:10:51.633 },
00:10:51.633 {
00:10:51.633 "name": "BaseBdev2",
00:10:51.633 "uuid": "a364d1d7-e383-4e97-b096-6370ca450636", 00:10:51.633 "is_configured": true, 00:10:51.633 "data_offset": 0, 00:10:51.633 "data_size": 65536 00:10:51.633 } 00:10:51.633 ] 00:10:51.633 }' 00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.633 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.200 [2024-12-06 18:09:17.572092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:52.200 "name": "Existed_Raid", 00:10:52.200 "aliases": [ 00:10:52.200 "4aef962f-4655-4efa-8cc0-4764d10771fd" 00:10:52.200 ], 
00:10:52.200 "product_name": "Raid Volume", 00:10:52.200 "block_size": 512, 00:10:52.200 "num_blocks": 65536, 00:10:52.200 "uuid": "4aef962f-4655-4efa-8cc0-4764d10771fd", 00:10:52.200 "assigned_rate_limits": { 00:10:52.200 "rw_ios_per_sec": 0, 00:10:52.200 "rw_mbytes_per_sec": 0, 00:10:52.200 "r_mbytes_per_sec": 0, 00:10:52.200 "w_mbytes_per_sec": 0 00:10:52.200 }, 00:10:52.200 "claimed": false, 00:10:52.200 "zoned": false, 00:10:52.200 "supported_io_types": { 00:10:52.200 "read": true, 00:10:52.200 "write": true, 00:10:52.200 "unmap": false, 00:10:52.200 "flush": false, 00:10:52.200 "reset": true, 00:10:52.200 "nvme_admin": false, 00:10:52.200 "nvme_io": false, 00:10:52.200 "nvme_io_md": false, 00:10:52.200 "write_zeroes": true, 00:10:52.200 "zcopy": false, 00:10:52.200 "get_zone_info": false, 00:10:52.200 "zone_management": false, 00:10:52.200 "zone_append": false, 00:10:52.200 "compare": false, 00:10:52.200 "compare_and_write": false, 00:10:52.200 "abort": false, 00:10:52.200 "seek_hole": false, 00:10:52.200 "seek_data": false, 00:10:52.200 "copy": false, 00:10:52.200 "nvme_iov_md": false 00:10:52.200 }, 00:10:52.200 "memory_domains": [ 00:10:52.200 { 00:10:52.200 "dma_device_id": "system", 00:10:52.200 "dma_device_type": 1 00:10:52.200 }, 00:10:52.200 { 00:10:52.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.200 "dma_device_type": 2 00:10:52.200 }, 00:10:52.200 { 00:10:52.200 "dma_device_id": "system", 00:10:52.200 "dma_device_type": 1 00:10:52.200 }, 00:10:52.200 { 00:10:52.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.200 "dma_device_type": 2 00:10:52.200 } 00:10:52.200 ], 00:10:52.200 "driver_specific": { 00:10:52.200 "raid": { 00:10:52.200 "uuid": "4aef962f-4655-4efa-8cc0-4764d10771fd", 00:10:52.200 "strip_size_kb": 0, 00:10:52.200 "state": "online", 00:10:52.200 "raid_level": "raid1", 00:10:52.200 "superblock": false, 00:10:52.200 "num_base_bdevs": 2, 00:10:52.200 "num_base_bdevs_discovered": 2, 00:10:52.200 "num_base_bdevs_operational": 
2, 00:10:52.200 "base_bdevs_list": [ 00:10:52.200 { 00:10:52.200 "name": "BaseBdev1", 00:10:52.200 "uuid": "8d7359e6-40a5-45ac-90f6-4531ca189062", 00:10:52.200 "is_configured": true, 00:10:52.200 "data_offset": 0, 00:10:52.200 "data_size": 65536 00:10:52.200 }, 00:10:52.200 { 00:10:52.200 "name": "BaseBdev2", 00:10:52.200 "uuid": "a364d1d7-e383-4e97-b096-6370ca450636", 00:10:52.200 "is_configured": true, 00:10:52.200 "data_offset": 0, 00:10:52.200 "data_size": 65536 00:10:52.200 } 00:10:52.200 ] 00:10:52.200 } 00:10:52.200 } 00:10:52.200 }' 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:52.200 BaseBdev2' 00:10:52.200 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.459 18:09:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.459 [2024-12-06 18:09:17.851849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.459 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.717 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.717 "name": "Existed_Raid", 00:10:52.717 "uuid": 
"4aef962f-4655-4efa-8cc0-4764d10771fd", 00:10:52.717 "strip_size_kb": 0, 00:10:52.717 "state": "online", 00:10:52.717 "raid_level": "raid1", 00:10:52.717 "superblock": false, 00:10:52.717 "num_base_bdevs": 2, 00:10:52.717 "num_base_bdevs_discovered": 1, 00:10:52.717 "num_base_bdevs_operational": 1, 00:10:52.717 "base_bdevs_list": [ 00:10:52.717 { 00:10:52.717 "name": null, 00:10:52.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.717 "is_configured": false, 00:10:52.717 "data_offset": 0, 00:10:52.717 "data_size": 65536 00:10:52.717 }, 00:10:52.717 { 00:10:52.717 "name": "BaseBdev2", 00:10:52.717 "uuid": "a364d1d7-e383-4e97-b096-6370ca450636", 00:10:52.717 "is_configured": true, 00:10:52.717 "data_offset": 0, 00:10:52.717 "data_size": 65536 00:10:52.717 } 00:10:52.717 ] 00:10:52.717 }' 00:10:52.717 18:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.717 18:09:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.975 18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:52.975 18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.975 18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.975 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.975 18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.975 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.976 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.233 [2024-12-06 18:09:18.502913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:53.233 [2024-12-06 18:09:18.503036] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.233 [2024-12-06 18:09:18.587073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.233 [2024-12-06 18:09:18.587365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.233 [2024-12-06 18:09:18.587400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:53.233 
18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62735 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62735 ']' 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62735 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62735 00:10:53.233 killing process with pid 62735 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62735' 00:10:53.233 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62735 00:10:53.233 [2024-12-06 18:09:18.678564] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.234 18:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62735 00:10:53.234 [2024-12-06 18:09:18.693679] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:54.607 00:10:54.607 real 0m5.616s 00:10:54.607 user 0m8.526s 00:10:54.607 sys 0m0.770s 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:10:54.607 ************************************ 00:10:54.607 END TEST raid_state_function_test 00:10:54.607 ************************************ 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.607 18:09:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:10:54.607 18:09:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:54.607 18:09:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.607 18:09:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:54.607 ************************************ 00:10:54.607 START TEST raid_state_function_test_sb 00:10:54.607 ************************************ 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:54.607 Process raid pid: 62994 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62994 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62994' 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62994 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62994 ']' 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.607 18:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.607 [2024-12-06 18:09:19.923493] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:10:54.607 [2024-12-06 18:09:19.923682] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.607 [2024-12-06 18:09:20.109237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.864 [2024-12-06 18:09:20.241405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.122 [2024-12-06 18:09:20.448840] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.122 [2024-12-06 18:09:20.448896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.688 18:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.688 18:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:55.688 18:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:55.688 18:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.688 18:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.688 [2024-12-06 18:09:20.970284] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.688 [2024-12-06 18:09:20.970385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.688 [2024-12-06 18:09:20.970403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.688 [2024-12-06 18:09:20.970435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.688 18:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.688 18:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:55.688 18:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.688 18:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.688 18:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.688 18:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.689 18:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:55.689 18:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.689 18:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.689 18:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:55.689 18:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.689 18:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.689 18:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.689 18:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.689 18:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.689 18:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.689 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.689 "name": "Existed_Raid", 00:10:55.689 "uuid": "42006483-6de2-4899-b3ef-255f59099011", 00:10:55.689 "strip_size_kb": 0, 00:10:55.689 "state": "configuring", 00:10:55.689 "raid_level": "raid1", 00:10:55.689 "superblock": true, 00:10:55.689 "num_base_bdevs": 2, 00:10:55.689 "num_base_bdevs_discovered": 0, 00:10:55.689 "num_base_bdevs_operational": 2, 00:10:55.689 "base_bdevs_list": [ 00:10:55.689 { 00:10:55.689 "name": "BaseBdev1", 00:10:55.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.689 "is_configured": false, 00:10:55.689 "data_offset": 0, 00:10:55.689 "data_size": 0 00:10:55.689 }, 00:10:55.689 { 00:10:55.689 "name": "BaseBdev2", 00:10:55.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.689 "is_configured": false, 00:10:55.689 "data_offset": 0, 00:10:55.689 "data_size": 0 00:10:55.689 } 00:10:55.689 ] 00:10:55.689 }' 00:10:55.689 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.689 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.270 [2024-12-06 18:09:21.490366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.270 [2024-12-06 18:09:21.490408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.270 [2024-12-06 18:09:21.502359] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:56.270 [2024-12-06 18:09:21.502592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:56.270 [2024-12-06 18:09:21.502619] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:56.270 [2024-12-06 18:09:21.502640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.270 [2024-12-06 18:09:21.547830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.270 BaseBdev1 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.270 [ 00:10:56.270 { 00:10:56.270 "name": "BaseBdev1", 00:10:56.270 "aliases": [ 00:10:56.270 "957a4284-7b88-4bce-8e74-d79bf7859e4c" 00:10:56.270 ], 00:10:56.270 "product_name": "Malloc 
disk", 00:10:56.270 "block_size": 512, 00:10:56.270 "num_blocks": 65536, 00:10:56.270 "uuid": "957a4284-7b88-4bce-8e74-d79bf7859e4c", 00:10:56.270 "assigned_rate_limits": { 00:10:56.270 "rw_ios_per_sec": 0, 00:10:56.270 "rw_mbytes_per_sec": 0, 00:10:56.270 "r_mbytes_per_sec": 0, 00:10:56.270 "w_mbytes_per_sec": 0 00:10:56.270 }, 00:10:56.270 "claimed": true, 00:10:56.270 "claim_type": "exclusive_write", 00:10:56.270 "zoned": false, 00:10:56.270 "supported_io_types": { 00:10:56.270 "read": true, 00:10:56.270 "write": true, 00:10:56.270 "unmap": true, 00:10:56.270 "flush": true, 00:10:56.270 "reset": true, 00:10:56.270 "nvme_admin": false, 00:10:56.270 "nvme_io": false, 00:10:56.270 "nvme_io_md": false, 00:10:56.270 "write_zeroes": true, 00:10:56.270 "zcopy": true, 00:10:56.270 "get_zone_info": false, 00:10:56.270 "zone_management": false, 00:10:56.270 "zone_append": false, 00:10:56.270 "compare": false, 00:10:56.270 "compare_and_write": false, 00:10:56.270 "abort": true, 00:10:56.270 "seek_hole": false, 00:10:56.270 "seek_data": false, 00:10:56.270 "copy": true, 00:10:56.270 "nvme_iov_md": false 00:10:56.270 }, 00:10:56.270 "memory_domains": [ 00:10:56.270 { 00:10:56.270 "dma_device_id": "system", 00:10:56.270 "dma_device_type": 1 00:10:56.270 }, 00:10:56.270 { 00:10:56.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.270 "dma_device_type": 2 00:10:56.270 } 00:10:56.270 ], 00:10:56.270 "driver_specific": {} 00:10:56.270 } 00:10:56.270 ] 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.270 18:09:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.270 "name": "Existed_Raid", 00:10:56.270 "uuid": "650541bc-29e6-4358-a25f-a13e042e9a21", 00:10:56.270 "strip_size_kb": 0, 00:10:56.270 "state": "configuring", 00:10:56.270 "raid_level": "raid1", 00:10:56.270 "superblock": true, 00:10:56.270 "num_base_bdevs": 2, 00:10:56.270 "num_base_bdevs_discovered": 1, 00:10:56.270 "num_base_bdevs_operational": 2, 00:10:56.270 "base_bdevs_list": [ 00:10:56.270 { 
00:10:56.270 "name": "BaseBdev1", 00:10:56.270 "uuid": "957a4284-7b88-4bce-8e74-d79bf7859e4c", 00:10:56.270 "is_configured": true, 00:10:56.270 "data_offset": 2048, 00:10:56.270 "data_size": 63488 00:10:56.270 }, 00:10:56.270 { 00:10:56.270 "name": "BaseBdev2", 00:10:56.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.270 "is_configured": false, 00:10:56.270 "data_offset": 0, 00:10:56.270 "data_size": 0 00:10:56.270 } 00:10:56.270 ] 00:10:56.270 }' 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.270 18:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.839 [2024-12-06 18:09:22.080033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.839 [2024-12-06 18:09:22.080222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.839 [2024-12-06 18:09:22.088066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.839 [2024-12-06 18:09:22.090439] bdev.c:8674:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:56.839 [2024-12-06 18:09:22.090499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.839 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.840 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:56.840 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.840 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.840 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.840 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.840 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.840 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.840 18:09:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.840 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.840 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.840 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.840 "name": "Existed_Raid", 00:10:56.840 "uuid": "f85f65e9-b499-4ca8-9961-5c466684c855", 00:10:56.840 "strip_size_kb": 0, 00:10:56.840 "state": "configuring", 00:10:56.840 "raid_level": "raid1", 00:10:56.840 "superblock": true, 00:10:56.840 "num_base_bdevs": 2, 00:10:56.840 "num_base_bdevs_discovered": 1, 00:10:56.840 "num_base_bdevs_operational": 2, 00:10:56.840 "base_bdevs_list": [ 00:10:56.840 { 00:10:56.840 "name": "BaseBdev1", 00:10:56.840 "uuid": "957a4284-7b88-4bce-8e74-d79bf7859e4c", 00:10:56.840 "is_configured": true, 00:10:56.840 "data_offset": 2048, 00:10:56.840 "data_size": 63488 00:10:56.840 }, 00:10:56.840 { 00:10:56.840 "name": "BaseBdev2", 00:10:56.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.840 "is_configured": false, 00:10:56.840 "data_offset": 0, 00:10:56.840 "data_size": 0 00:10:56.840 } 00:10:56.840 ] 00:10:56.840 }' 00:10:56.840 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.840 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.099 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:57.099 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.099 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.359 [2024-12-06 18:09:22.654551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.359 
[2024-12-06 18:09:22.654907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:57.359 [2024-12-06 18:09:22.654927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:57.359 BaseBdev2 00:10:57.359 [2024-12-06 18:09:22.655261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:57.359 [2024-12-06 18:09:22.655476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:57.360 [2024-12-06 18:09:22.655501] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:57.360 [2024-12-06 18:09:22.655686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.360 [ 00:10:57.360 { 00:10:57.360 "name": "BaseBdev2", 00:10:57.360 "aliases": [ 00:10:57.360 "08eb97cb-a7af-4cb1-8434-180b76a4aa9a" 00:10:57.360 ], 00:10:57.360 "product_name": "Malloc disk", 00:10:57.360 "block_size": 512, 00:10:57.360 "num_blocks": 65536, 00:10:57.360 "uuid": "08eb97cb-a7af-4cb1-8434-180b76a4aa9a", 00:10:57.360 "assigned_rate_limits": { 00:10:57.360 "rw_ios_per_sec": 0, 00:10:57.360 "rw_mbytes_per_sec": 0, 00:10:57.360 "r_mbytes_per_sec": 0, 00:10:57.360 "w_mbytes_per_sec": 0 00:10:57.360 }, 00:10:57.360 "claimed": true, 00:10:57.360 "claim_type": "exclusive_write", 00:10:57.360 "zoned": false, 00:10:57.360 "supported_io_types": { 00:10:57.360 "read": true, 00:10:57.360 "write": true, 00:10:57.360 "unmap": true, 00:10:57.360 "flush": true, 00:10:57.360 "reset": true, 00:10:57.360 "nvme_admin": false, 00:10:57.360 "nvme_io": false, 00:10:57.360 "nvme_io_md": false, 00:10:57.360 "write_zeroes": true, 00:10:57.360 "zcopy": true, 00:10:57.360 "get_zone_info": false, 00:10:57.360 "zone_management": false, 00:10:57.360 "zone_append": false, 00:10:57.360 "compare": false, 00:10:57.360 "compare_and_write": false, 00:10:57.360 "abort": true, 00:10:57.360 "seek_hole": false, 00:10:57.360 "seek_data": false, 00:10:57.360 "copy": true, 00:10:57.360 "nvme_iov_md": false 00:10:57.360 }, 00:10:57.360 "memory_domains": [ 00:10:57.360 { 00:10:57.360 "dma_device_id": "system", 00:10:57.360 "dma_device_type": 1 00:10:57.360 }, 00:10:57.360 { 00:10:57.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.360 "dma_device_type": 2 00:10:57.360 } 00:10:57.360 
], 00:10:57.360 "driver_specific": {} 00:10:57.360 } 00:10:57.360 ] 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.360 "name": "Existed_Raid", 00:10:57.360 "uuid": "f85f65e9-b499-4ca8-9961-5c466684c855", 00:10:57.360 "strip_size_kb": 0, 00:10:57.360 "state": "online", 00:10:57.360 "raid_level": "raid1", 00:10:57.360 "superblock": true, 00:10:57.360 "num_base_bdevs": 2, 00:10:57.360 "num_base_bdevs_discovered": 2, 00:10:57.360 "num_base_bdevs_operational": 2, 00:10:57.360 "base_bdevs_list": [ 00:10:57.360 { 00:10:57.360 "name": "BaseBdev1", 00:10:57.360 "uuid": "957a4284-7b88-4bce-8e74-d79bf7859e4c", 00:10:57.360 "is_configured": true, 00:10:57.360 "data_offset": 2048, 00:10:57.360 "data_size": 63488 00:10:57.360 }, 00:10:57.360 { 00:10:57.360 "name": "BaseBdev2", 00:10:57.360 "uuid": "08eb97cb-a7af-4cb1-8434-180b76a4aa9a", 00:10:57.360 "is_configured": true, 00:10:57.360 "data_offset": 2048, 00:10:57.360 "data_size": 63488 00:10:57.360 } 00:10:57.360 ] 00:10:57.360 }' 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.360 18:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@184 -- # local name 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.929 [2024-12-06 18:09:23.223164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.929 "name": "Existed_Raid", 00:10:57.929 "aliases": [ 00:10:57.929 "f85f65e9-b499-4ca8-9961-5c466684c855" 00:10:57.929 ], 00:10:57.929 "product_name": "Raid Volume", 00:10:57.929 "block_size": 512, 00:10:57.929 "num_blocks": 63488, 00:10:57.929 "uuid": "f85f65e9-b499-4ca8-9961-5c466684c855", 00:10:57.929 "assigned_rate_limits": { 00:10:57.929 "rw_ios_per_sec": 0, 00:10:57.929 "rw_mbytes_per_sec": 0, 00:10:57.929 "r_mbytes_per_sec": 0, 00:10:57.929 "w_mbytes_per_sec": 0 00:10:57.929 }, 00:10:57.929 "claimed": false, 00:10:57.929 "zoned": false, 00:10:57.929 "supported_io_types": { 00:10:57.929 "read": true, 00:10:57.929 "write": true, 00:10:57.929 "unmap": false, 00:10:57.929 "flush": false, 00:10:57.929 "reset": true, 00:10:57.929 "nvme_admin": false, 00:10:57.929 "nvme_io": false, 00:10:57.929 "nvme_io_md": false, 00:10:57.929 "write_zeroes": true, 00:10:57.929 "zcopy": false, 00:10:57.929 "get_zone_info": false, 00:10:57.929 "zone_management": false, 00:10:57.929 "zone_append": false, 00:10:57.929 "compare": false, 
00:10:57.929 "compare_and_write": false, 00:10:57.929 "abort": false, 00:10:57.929 "seek_hole": false, 00:10:57.929 "seek_data": false, 00:10:57.929 "copy": false, 00:10:57.929 "nvme_iov_md": false 00:10:57.929 }, 00:10:57.929 "memory_domains": [ 00:10:57.929 { 00:10:57.929 "dma_device_id": "system", 00:10:57.929 "dma_device_type": 1 00:10:57.929 }, 00:10:57.929 { 00:10:57.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.929 "dma_device_type": 2 00:10:57.929 }, 00:10:57.929 { 00:10:57.929 "dma_device_id": "system", 00:10:57.929 "dma_device_type": 1 00:10:57.929 }, 00:10:57.929 { 00:10:57.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.929 "dma_device_type": 2 00:10:57.929 } 00:10:57.929 ], 00:10:57.929 "driver_specific": { 00:10:57.929 "raid": { 00:10:57.929 "uuid": "f85f65e9-b499-4ca8-9961-5c466684c855", 00:10:57.929 "strip_size_kb": 0, 00:10:57.929 "state": "online", 00:10:57.929 "raid_level": "raid1", 00:10:57.929 "superblock": true, 00:10:57.929 "num_base_bdevs": 2, 00:10:57.929 "num_base_bdevs_discovered": 2, 00:10:57.929 "num_base_bdevs_operational": 2, 00:10:57.929 "base_bdevs_list": [ 00:10:57.929 { 00:10:57.929 "name": "BaseBdev1", 00:10:57.929 "uuid": "957a4284-7b88-4bce-8e74-d79bf7859e4c", 00:10:57.929 "is_configured": true, 00:10:57.929 "data_offset": 2048, 00:10:57.929 "data_size": 63488 00:10:57.929 }, 00:10:57.929 { 00:10:57.929 "name": "BaseBdev2", 00:10:57.929 "uuid": "08eb97cb-a7af-4cb1-8434-180b76a4aa9a", 00:10:57.929 "is_configured": true, 00:10:57.929 "data_offset": 2048, 00:10:57.929 "data_size": 63488 00:10:57.929 } 00:10:57.929 ] 00:10:57.929 } 00:10:57.929 } 00:10:57.929 }' 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:57.929 BaseBdev2' 00:10:57.929 18:09:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.929 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.930 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.930 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.930 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.930 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.930 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:57.930 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.930 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.930 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.930 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.189 18:09:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.189 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.189 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:58.189 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.189 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.189 [2024-12-06 18:09:23.470927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:58.189 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.189 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:58.189 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:58.189 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:58.189 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:58.189 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.190 "name": "Existed_Raid", 00:10:58.190 "uuid": "f85f65e9-b499-4ca8-9961-5c466684c855", 00:10:58.190 "strip_size_kb": 0, 00:10:58.190 "state": "online", 00:10:58.190 "raid_level": "raid1", 00:10:58.190 "superblock": true, 00:10:58.190 "num_base_bdevs": 2, 00:10:58.190 "num_base_bdevs_discovered": 1, 00:10:58.190 "num_base_bdevs_operational": 1, 00:10:58.190 "base_bdevs_list": [ 00:10:58.190 { 00:10:58.190 "name": null, 00:10:58.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.190 "is_configured": false, 00:10:58.190 "data_offset": 0, 00:10:58.190 "data_size": 63488 00:10:58.190 }, 00:10:58.190 { 00:10:58.190 "name": "BaseBdev2", 00:10:58.190 "uuid": "08eb97cb-a7af-4cb1-8434-180b76a4aa9a", 00:10:58.190 "is_configured": true, 00:10:58.190 "data_offset": 2048, 00:10:58.190 "data_size": 63488 
00:10:58.190 } 00:10:58.190 ] 00:10:58.190 }' 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.190 18:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.759 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:58.759 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.759 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.759 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:58.759 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.759 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.759 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.759 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.759 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.760 [2024-12-06 18:09:24.123861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:58.760 [2024-12-06 18:09:24.123988] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.760 [2024-12-06 18:09:24.209431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.760 [2024-12-06 
18:09:24.209499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.760 [2024-12-06 18:09:24.209520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62994 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62994 ']' 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62994 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.760 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62994 00:10:59.019 killing process with pid 62994 00:10:59.019 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.019 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.019 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62994' 00:10:59.019 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62994 00:10:59.019 [2024-12-06 18:09:24.306273] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.019 18:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62994 00:10:59.019 [2024-12-06 18:09:24.321185] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:59.956 18:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:59.956 00:10:59.956 real 0m5.561s 00:10:59.956 user 0m8.420s 00:10:59.956 sys 0m0.788s 00:10:59.956 18:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.956 ************************************ 00:10:59.956 END TEST raid_state_function_test_sb 00:10:59.956 ************************************ 00:10:59.956 18:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.956 18:09:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:10:59.956 18:09:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:59.956 18:09:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.956 18:09:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:59.956 
************************************ 00:10:59.956 START TEST raid_superblock_test 00:10:59.956 ************************************ 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63246 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # waitforlisten 63246 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63246 ']' 00:10:59.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.956 18:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.214 [2024-12-06 18:09:25.537283] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:11:00.214 [2024-12-06 18:09:25.537461] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63246 ] 00:11:00.214 [2024-12-06 18:09:25.721835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.472 [2024-12-06 18:09:25.847849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.731 [2024-12-06 18:09:26.049436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.731 [2024-12-06 18:09:26.049503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.989 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.989 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:00.989 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:00.989 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:00.989 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:00.989 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:00.989 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:00.989 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:00.989 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:00.989 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:00.989 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:00.989 
18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.989 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.247 malloc1 00:11:01.247 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.247 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:01.247 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.247 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.247 [2024-12-06 18:09:26.550170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:01.247 [2024-12-06 18:09:26.550376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.247 [2024-12-06 18:09:26.550454] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:01.247 [2024-12-06 18:09:26.550683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.247 [2024-12-06 18:09:26.553473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.248 [2024-12-06 18:09:26.553642] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:01.248 pt1 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.248 malloc2 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.248 [2024-12-06 18:09:26.606259] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:01.248 [2024-12-06 18:09:26.606452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.248 [2024-12-06 18:09:26.606534] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:01.248 [2024-12-06 18:09:26.606652] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.248 [2024-12-06 18:09:26.609596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.248 [2024-12-06 18:09:26.609744] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:01.248 
pt2 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.248 [2024-12-06 18:09:26.618495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:01.248 [2024-12-06 18:09:26.620910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:01.248 [2024-12-06 18:09:26.621132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:01.248 [2024-12-06 18:09:26.621156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:01.248 [2024-12-06 18:09:26.621462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:01.248 [2024-12-06 18:09:26.621663] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:01.248 [2024-12-06 18:09:26.621688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:01.248 [2024-12-06 18:09:26.621890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.248 "name": "raid_bdev1", 00:11:01.248 "uuid": "85e1b0d2-2452-4450-9a48-8f722e8eaded", 00:11:01.248 "strip_size_kb": 0, 00:11:01.248 "state": "online", 00:11:01.248 "raid_level": "raid1", 00:11:01.248 "superblock": true, 00:11:01.248 "num_base_bdevs": 2, 00:11:01.248 "num_base_bdevs_discovered": 2, 00:11:01.248 "num_base_bdevs_operational": 2, 00:11:01.248 "base_bdevs_list": [ 00:11:01.248 { 00:11:01.248 "name": "pt1", 00:11:01.248 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:11:01.248 "is_configured": true, 00:11:01.248 "data_offset": 2048, 00:11:01.248 "data_size": 63488 00:11:01.248 }, 00:11:01.248 { 00:11:01.248 "name": "pt2", 00:11:01.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.248 "is_configured": true, 00:11:01.248 "data_offset": 2048, 00:11:01.248 "data_size": 63488 00:11:01.248 } 00:11:01.248 ] 00:11:01.248 }' 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.248 18:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.815 [2024-12-06 18:09:27.119018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:11:01.815 "name": "raid_bdev1", 00:11:01.815 "aliases": [ 00:11:01.815 "85e1b0d2-2452-4450-9a48-8f722e8eaded" 00:11:01.815 ], 00:11:01.815 "product_name": "Raid Volume", 00:11:01.815 "block_size": 512, 00:11:01.815 "num_blocks": 63488, 00:11:01.815 "uuid": "85e1b0d2-2452-4450-9a48-8f722e8eaded", 00:11:01.815 "assigned_rate_limits": { 00:11:01.815 "rw_ios_per_sec": 0, 00:11:01.815 "rw_mbytes_per_sec": 0, 00:11:01.815 "r_mbytes_per_sec": 0, 00:11:01.815 "w_mbytes_per_sec": 0 00:11:01.815 }, 00:11:01.815 "claimed": false, 00:11:01.815 "zoned": false, 00:11:01.815 "supported_io_types": { 00:11:01.815 "read": true, 00:11:01.815 "write": true, 00:11:01.815 "unmap": false, 00:11:01.815 "flush": false, 00:11:01.815 "reset": true, 00:11:01.815 "nvme_admin": false, 00:11:01.815 "nvme_io": false, 00:11:01.815 "nvme_io_md": false, 00:11:01.815 "write_zeroes": true, 00:11:01.815 "zcopy": false, 00:11:01.815 "get_zone_info": false, 00:11:01.815 "zone_management": false, 00:11:01.815 "zone_append": false, 00:11:01.815 "compare": false, 00:11:01.815 "compare_and_write": false, 00:11:01.815 "abort": false, 00:11:01.815 "seek_hole": false, 00:11:01.815 "seek_data": false, 00:11:01.815 "copy": false, 00:11:01.815 "nvme_iov_md": false 00:11:01.815 }, 00:11:01.815 "memory_domains": [ 00:11:01.815 { 00:11:01.815 "dma_device_id": "system", 00:11:01.815 "dma_device_type": 1 00:11:01.815 }, 00:11:01.815 { 00:11:01.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.815 "dma_device_type": 2 00:11:01.815 }, 00:11:01.815 { 00:11:01.815 "dma_device_id": "system", 00:11:01.815 "dma_device_type": 1 00:11:01.815 }, 00:11:01.815 { 00:11:01.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.815 "dma_device_type": 2 00:11:01.815 } 00:11:01.815 ], 00:11:01.815 "driver_specific": { 00:11:01.815 "raid": { 00:11:01.815 "uuid": "85e1b0d2-2452-4450-9a48-8f722e8eaded", 00:11:01.815 "strip_size_kb": 0, 00:11:01.815 "state": "online", 00:11:01.815 "raid_level": "raid1", 
00:11:01.815 "superblock": true, 00:11:01.815 "num_base_bdevs": 2, 00:11:01.815 "num_base_bdevs_discovered": 2, 00:11:01.815 "num_base_bdevs_operational": 2, 00:11:01.815 "base_bdevs_list": [ 00:11:01.815 { 00:11:01.815 "name": "pt1", 00:11:01.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.815 "is_configured": true, 00:11:01.815 "data_offset": 2048, 00:11:01.815 "data_size": 63488 00:11:01.815 }, 00:11:01.815 { 00:11:01.815 "name": "pt2", 00:11:01.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.815 "is_configured": true, 00:11:01.815 "data_offset": 2048, 00:11:01.815 "data_size": 63488 00:11:01.815 } 00:11:01.815 ] 00:11:01.815 } 00:11:01.815 } 00:11:01.815 }' 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:01.815 pt2' 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.815 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.075 [2024-12-06 18:09:27.395046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=85e1b0d2-2452-4450-9a48-8f722e8eaded 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 85e1b0d2-2452-4450-9a48-8f722e8eaded ']' 00:11:02.075 18:09:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.075 [2024-12-06 18:09:27.442693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:02.075 [2024-12-06 18:09:27.442734] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.075 [2024-12-06 18:09:27.442855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.075 [2024-12-06 18:09:27.442933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.075 [2024-12-06 18:09:27.442954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:02.075 18:09:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.075 [2024-12-06 18:09:27.582798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:02.075 [2024-12-06 18:09:27.585406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:02.075 [2024-12-06 18:09:27.585493] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:02.075 [2024-12-06 18:09:27.585567] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:02.075 [2024-12-06 18:09:27.585595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:02.075 [2024-12-06 18:09:27.585611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:02.075 request: 00:11:02.075 { 00:11:02.075 "name": "raid_bdev1", 00:11:02.075 "raid_level": "raid1", 00:11:02.075 "base_bdevs": [ 00:11:02.075 "malloc1", 00:11:02.075 "malloc2" 00:11:02.075 ], 00:11:02.075 "superblock": false, 00:11:02.075 "method": "bdev_raid_create", 00:11:02.075 "req_id": 1 00:11:02.075 } 00:11:02.075 Got 
JSON-RPC error response 00:11:02.075 response: 00:11:02.075 { 00:11:02.075 "code": -17, 00:11:02.075 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:02.075 } 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.075 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.335 [2024-12-06 18:09:27.646800] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:02.335 [2024-12-06 18:09:27.646986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:02.335 [2024-12-06 18:09:27.647176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:02.335 [2024-12-06 18:09:27.647315] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.335 [2024-12-06 18:09:27.650282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.335 [2024-12-06 18:09:27.650447] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:02.335 [2024-12-06 18:09:27.650702] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:02.335 [2024-12-06 18:09:27.650913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:02.335 pt1 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.335 
18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.335 "name": "raid_bdev1", 00:11:02.335 "uuid": "85e1b0d2-2452-4450-9a48-8f722e8eaded", 00:11:02.335 "strip_size_kb": 0, 00:11:02.335 "state": "configuring", 00:11:02.335 "raid_level": "raid1", 00:11:02.335 "superblock": true, 00:11:02.335 "num_base_bdevs": 2, 00:11:02.335 "num_base_bdevs_discovered": 1, 00:11:02.335 "num_base_bdevs_operational": 2, 00:11:02.335 "base_bdevs_list": [ 00:11:02.335 { 00:11:02.335 "name": "pt1", 00:11:02.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.335 "is_configured": true, 00:11:02.335 "data_offset": 2048, 00:11:02.335 "data_size": 63488 00:11:02.335 }, 00:11:02.335 { 00:11:02.335 "name": null, 00:11:02.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.335 "is_configured": false, 00:11:02.335 "data_offset": 2048, 00:11:02.335 "data_size": 63488 00:11:02.335 } 00:11:02.335 ] 00:11:02.335 }' 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.335 18:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.903 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:02.903 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:02.903 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:11:02.903 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:02.903 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.903 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.903 [2024-12-06 18:09:28.158978] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:02.903 [2024-12-06 18:09:28.159205] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.903 [2024-12-06 18:09:28.159247] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:02.903 [2024-12-06 18:09:28.159266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.903 [2024-12-06 18:09:28.159842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.903 [2024-12-06 18:09:28.159879] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:02.903 [2024-12-06 18:09:28.159978] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:02.903 [2024-12-06 18:09:28.160018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:02.903 [2024-12-06 18:09:28.160172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:02.903 [2024-12-06 18:09:28.160200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:02.903 [2024-12-06 18:09:28.160506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:02.903 [2024-12-06 18:09:28.160692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:02.903 [2024-12-06 18:09:28.160714] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:11:02.904 [2024-12-06 18:09:28.161062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.904 pt2 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.904 "name": "raid_bdev1", 00:11:02.904 "uuid": "85e1b0d2-2452-4450-9a48-8f722e8eaded", 00:11:02.904 "strip_size_kb": 0, 00:11:02.904 "state": "online", 00:11:02.904 "raid_level": "raid1", 00:11:02.904 "superblock": true, 00:11:02.904 "num_base_bdevs": 2, 00:11:02.904 "num_base_bdevs_discovered": 2, 00:11:02.904 "num_base_bdevs_operational": 2, 00:11:02.904 "base_bdevs_list": [ 00:11:02.904 { 00:11:02.904 "name": "pt1", 00:11:02.904 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.904 "is_configured": true, 00:11:02.904 "data_offset": 2048, 00:11:02.904 "data_size": 63488 00:11:02.904 }, 00:11:02.904 { 00:11:02.904 "name": "pt2", 00:11:02.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.904 "is_configured": true, 00:11:02.904 "data_offset": 2048, 00:11:02.904 "data_size": 63488 00:11:02.904 } 00:11:02.904 ] 00:11:02.904 }' 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.904 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.163 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:03.163 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:03.163 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.163 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.163 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.163 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.163 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.163 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.163 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.163 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.163 [2024-12-06 18:09:28.675408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.422 "name": "raid_bdev1", 00:11:03.422 "aliases": [ 00:11:03.422 "85e1b0d2-2452-4450-9a48-8f722e8eaded" 00:11:03.422 ], 00:11:03.422 "product_name": "Raid Volume", 00:11:03.422 "block_size": 512, 00:11:03.422 "num_blocks": 63488, 00:11:03.422 "uuid": "85e1b0d2-2452-4450-9a48-8f722e8eaded", 00:11:03.422 "assigned_rate_limits": { 00:11:03.422 "rw_ios_per_sec": 0, 00:11:03.422 "rw_mbytes_per_sec": 0, 00:11:03.422 "r_mbytes_per_sec": 0, 00:11:03.422 "w_mbytes_per_sec": 0 00:11:03.422 }, 00:11:03.422 "claimed": false, 00:11:03.422 "zoned": false, 00:11:03.422 "supported_io_types": { 00:11:03.422 "read": true, 00:11:03.422 "write": true, 00:11:03.422 "unmap": false, 00:11:03.422 "flush": false, 00:11:03.422 "reset": true, 00:11:03.422 "nvme_admin": false, 00:11:03.422 "nvme_io": false, 00:11:03.422 "nvme_io_md": false, 00:11:03.422 "write_zeroes": true, 00:11:03.422 "zcopy": false, 00:11:03.422 "get_zone_info": false, 00:11:03.422 "zone_management": false, 00:11:03.422 "zone_append": false, 00:11:03.422 "compare": false, 00:11:03.422 "compare_and_write": false, 00:11:03.422 "abort": false, 00:11:03.422 "seek_hole": false, 00:11:03.422 "seek_data": false, 00:11:03.422 "copy": false, 00:11:03.422 "nvme_iov_md": false 00:11:03.422 }, 00:11:03.422 "memory_domains": [ 00:11:03.422 { 00:11:03.422 "dma_device_id": 
"system", 00:11:03.422 "dma_device_type": 1 00:11:03.422 }, 00:11:03.422 { 00:11:03.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.422 "dma_device_type": 2 00:11:03.422 }, 00:11:03.422 { 00:11:03.422 "dma_device_id": "system", 00:11:03.422 "dma_device_type": 1 00:11:03.422 }, 00:11:03.422 { 00:11:03.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.422 "dma_device_type": 2 00:11:03.422 } 00:11:03.422 ], 00:11:03.422 "driver_specific": { 00:11:03.422 "raid": { 00:11:03.422 "uuid": "85e1b0d2-2452-4450-9a48-8f722e8eaded", 00:11:03.422 "strip_size_kb": 0, 00:11:03.422 "state": "online", 00:11:03.422 "raid_level": "raid1", 00:11:03.422 "superblock": true, 00:11:03.422 "num_base_bdevs": 2, 00:11:03.422 "num_base_bdevs_discovered": 2, 00:11:03.422 "num_base_bdevs_operational": 2, 00:11:03.422 "base_bdevs_list": [ 00:11:03.422 { 00:11:03.422 "name": "pt1", 00:11:03.422 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.422 "is_configured": true, 00:11:03.422 "data_offset": 2048, 00:11:03.422 "data_size": 63488 00:11:03.422 }, 00:11:03.422 { 00:11:03.422 "name": "pt2", 00:11:03.422 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.422 "is_configured": true, 00:11:03.422 "data_offset": 2048, 00:11:03.422 "data_size": 63488 00:11:03.422 } 00:11:03.422 ] 00:11:03.422 } 00:11:03.422 } 00:11:03.422 }' 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:03.422 pt2' 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.422 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.422 [2024-12-06 18:09:28.935430] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 85e1b0d2-2452-4450-9a48-8f722e8eaded '!=' 85e1b0d2-2452-4450-9a48-8f722e8eaded ']' 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.681 [2024-12-06 18:09:28.987215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.681 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.682 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.682 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.682 18:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.682 18:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.682 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.682 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.682 "name": "raid_bdev1", 00:11:03.682 "uuid": "85e1b0d2-2452-4450-9a48-8f722e8eaded", 00:11:03.682 "strip_size_kb": 0, 00:11:03.682 "state": "online", 00:11:03.682 "raid_level": "raid1", 00:11:03.682 "superblock": true, 00:11:03.682 "num_base_bdevs": 2, 00:11:03.682 "num_base_bdevs_discovered": 1, 00:11:03.682 "num_base_bdevs_operational": 1, 00:11:03.682 "base_bdevs_list": [ 00:11:03.682 { 00:11:03.682 "name": null, 00:11:03.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.682 "is_configured": false, 00:11:03.682 "data_offset": 0, 00:11:03.682 "data_size": 63488 00:11:03.682 }, 00:11:03.682 { 00:11:03.682 "name": "pt2", 00:11:03.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.682 "is_configured": true, 00:11:03.682 "data_offset": 2048, 00:11:03.682 "data_size": 63488 00:11:03.682 } 00:11:03.682 ] 00:11:03.682 }' 
00:11:03.682 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.682 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.250 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.251 [2024-12-06 18:09:29.511380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.251 [2024-12-06 18:09:29.511567] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.251 [2024-12-06 18:09:29.511683] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.251 [2024-12-06 18:09:29.511749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.251 [2024-12-06 18:09:29.511795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.251 [2024-12-06 18:09:29.587361] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.251 [2024-12-06 18:09:29.587430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.251 [2024-12-06 18:09:29.587454] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:04.251 [2024-12-06 18:09:29.587471] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.251 
[2024-12-06 18:09:29.590306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.251 [2024-12-06 18:09:29.590358] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.251 [2024-12-06 18:09:29.590454] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:04.251 [2024-12-06 18:09:29.590515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.251 [2024-12-06 18:09:29.590640] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:04.251 [2024-12-06 18:09:29.590663] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:04.251 [2024-12-06 18:09:29.590987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:04.251 [2024-12-06 18:09:29.591236] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:04.251 [2024-12-06 18:09:29.591258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:04.251 [2024-12-06 18:09:29.591480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.251 pt2 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.251 "name": "raid_bdev1", 00:11:04.251 "uuid": "85e1b0d2-2452-4450-9a48-8f722e8eaded", 00:11:04.251 "strip_size_kb": 0, 00:11:04.251 "state": "online", 00:11:04.251 "raid_level": "raid1", 00:11:04.251 "superblock": true, 00:11:04.251 "num_base_bdevs": 2, 00:11:04.251 "num_base_bdevs_discovered": 1, 00:11:04.251 "num_base_bdevs_operational": 1, 00:11:04.251 "base_bdevs_list": [ 00:11:04.251 { 00:11:04.251 "name": null, 00:11:04.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.251 "is_configured": false, 00:11:04.251 "data_offset": 2048, 00:11:04.251 "data_size": 63488 00:11:04.251 }, 00:11:04.251 { 00:11:04.251 "name": "pt2", 00:11:04.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.251 "is_configured": true, 00:11:04.251 "data_offset": 2048, 00:11:04.251 "data_size": 63488 00:11:04.251 } 00:11:04.251 ] 00:11:04.251 }' 
00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.251 18:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.821 [2024-12-06 18:09:30.099521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.821 [2024-12-06 18:09:30.099686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.821 [2024-12-06 18:09:30.099808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.821 [2024-12-06 18:09:30.099882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.821 [2024-12-06 18:09:30.099898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.821 [2024-12-06 18:09:30.163551] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:04.821 [2024-12-06 18:09:30.163743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.821 [2024-12-06 18:09:30.163798] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:04.821 [2024-12-06 18:09:30.163815] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.821 [2024-12-06 18:09:30.166638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.821 [2024-12-06 18:09:30.166684] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:04.821 [2024-12-06 18:09:30.166934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:04.821 [2024-12-06 18:09:30.167032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:04.821 [2024-12-06 18:09:30.167231] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:04.821 [2024-12-06 18:09:30.167251] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.821 [2024-12-06 18:09:30.167274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:04.821 [2024-12-06 18:09:30.167338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:11:04.821 [2024-12-06 18:09:30.167440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:04.821 [2024-12-06 18:09:30.167456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:04.821 [2024-12-06 18:09:30.167793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:04.821 [2024-12-06 18:09:30.167983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:04.821 [2024-12-06 18:09:30.168005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:04.821 [2024-12-06 18:09:30.168228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.821 pt1 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.821 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.821 "name": "raid_bdev1", 00:11:04.821 "uuid": "85e1b0d2-2452-4450-9a48-8f722e8eaded", 00:11:04.821 "strip_size_kb": 0, 00:11:04.821 "state": "online", 00:11:04.821 "raid_level": "raid1", 00:11:04.822 "superblock": true, 00:11:04.822 "num_base_bdevs": 2, 00:11:04.822 "num_base_bdevs_discovered": 1, 00:11:04.822 "num_base_bdevs_operational": 1, 00:11:04.822 "base_bdevs_list": [ 00:11:04.822 { 00:11:04.822 "name": null, 00:11:04.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.822 "is_configured": false, 00:11:04.822 "data_offset": 2048, 00:11:04.822 "data_size": 63488 00:11:04.822 }, 00:11:04.822 { 00:11:04.822 "name": "pt2", 00:11:04.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.822 "is_configured": true, 00:11:04.822 "data_offset": 2048, 00:11:04.822 "data_size": 63488 00:11:04.822 } 00:11:04.822 ] 00:11:04.822 }' 00:11:04.822 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.822 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.394 [2024-12-06 18:09:30.692590] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 85e1b0d2-2452-4450-9a48-8f722e8eaded '!=' 85e1b0d2-2452-4450-9a48-8f722e8eaded ']' 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63246 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63246 ']' 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63246 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.394 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63246 00:11:05.395 18:09:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.395 killing process with pid 63246 00:11:05.395 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.395 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63246' 00:11:05.395 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63246 00:11:05.395 [2024-12-06 18:09:30.775532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.395 [2024-12-06 18:09:30.775633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.395 18:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63246 00:11:05.395 [2024-12-06 18:09:30.775695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.395 [2024-12-06 18:09:30.775718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:05.654 [2024-12-06 18:09:30.954221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.590 18:09:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:06.590 00:11:06.590 real 0m6.556s 00:11:06.590 user 0m10.384s 00:11:06.590 sys 0m0.922s 00:11:06.590 ************************************ 00:11:06.590 END TEST raid_superblock_test 00:11:06.590 ************************************ 00:11:06.590 18:09:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.590 18:09:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.590 18:09:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:11:06.590 18:09:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:06.590 18:09:32 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.590 18:09:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.590 ************************************ 00:11:06.590 START TEST raid_read_error_test 00:11:06.590 ************************************ 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:06.590 18:09:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kk9IFhftj9 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63587 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63587 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63587 ']' 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.590 18:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.849 [2024-12-06 18:09:32.156792] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:11:06.849 [2024-12-06 18:09:32.156961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63587 ] 00:11:06.849 [2024-12-06 18:09:32.342109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.107 [2024-12-06 18:09:32.471952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.366 [2024-12-06 18:09:32.673815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.366 [2024-12-06 18:09:32.673893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.932 BaseBdev1_malloc 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.932 true 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.932 [2024-12-06 18:09:33.225591] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:07.932 [2024-12-06 18:09:33.225672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.932 [2024-12-06 18:09:33.225700] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:07.932 [2024-12-06 18:09:33.225719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.932 [2024-12-06 18:09:33.228551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.932 [2024-12-06 18:09:33.228616] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:07.932 BaseBdev1 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:07.932 BaseBdev2_malloc 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.932 true 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.932 [2024-12-06 18:09:33.281695] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:07.932 [2024-12-06 18:09:33.281761] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.932 [2024-12-06 18:09:33.281798] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:07.932 [2024-12-06 18:09:33.281817] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.932 [2024-12-06 18:09:33.284510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.932 [2024-12-06 18:09:33.284561] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:07.932 BaseBdev2 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:07.932 18:09:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.932 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.932 [2024-12-06 18:09:33.289801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.933 [2024-12-06 18:09:33.292196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.933 [2024-12-06 18:09:33.292451] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:07.933 [2024-12-06 18:09:33.292475] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:07.933 [2024-12-06 18:09:33.292794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:07.933 [2024-12-06 18:09:33.293025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:07.933 [2024-12-06 18:09:33.293043] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:07.933 [2024-12-06 18:09:33.293234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.933 "name": "raid_bdev1", 00:11:07.933 "uuid": "d41473e2-67a3-44b7-8ce4-f23d705924c2", 00:11:07.933 "strip_size_kb": 0, 00:11:07.933 "state": "online", 00:11:07.933 "raid_level": "raid1", 00:11:07.933 "superblock": true, 00:11:07.933 "num_base_bdevs": 2, 00:11:07.933 "num_base_bdevs_discovered": 2, 00:11:07.933 "num_base_bdevs_operational": 2, 00:11:07.933 "base_bdevs_list": [ 00:11:07.933 { 00:11:07.933 "name": "BaseBdev1", 00:11:07.933 "uuid": "fe9798f2-52b9-5865-b6b8-c56e6d244ee7", 00:11:07.933 "is_configured": true, 00:11:07.933 "data_offset": 2048, 00:11:07.933 "data_size": 63488 00:11:07.933 }, 00:11:07.933 { 00:11:07.933 "name": "BaseBdev2", 00:11:07.933 "uuid": "cdfa78c1-09cc-5c12-870d-9658df0fc118", 00:11:07.933 "is_configured": true, 00:11:07.933 "data_offset": 2048, 00:11:07.933 "data_size": 63488 00:11:07.933 } 00:11:07.933 ] 00:11:07.933 }' 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.933 18:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.497 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:08.497 18:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:08.497 [2024-12-06 18:09:33.947353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.483 18:09:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.483 "name": "raid_bdev1", 00:11:09.483 "uuid": "d41473e2-67a3-44b7-8ce4-f23d705924c2", 00:11:09.483 "strip_size_kb": 0, 00:11:09.483 "state": "online", 00:11:09.483 "raid_level": "raid1", 00:11:09.483 "superblock": true, 00:11:09.483 "num_base_bdevs": 2, 00:11:09.483 "num_base_bdevs_discovered": 2, 00:11:09.483 "num_base_bdevs_operational": 2, 00:11:09.483 "base_bdevs_list": [ 00:11:09.483 { 00:11:09.483 "name": "BaseBdev1", 00:11:09.483 "uuid": "fe9798f2-52b9-5865-b6b8-c56e6d244ee7", 00:11:09.483 "is_configured": true, 00:11:09.483 "data_offset": 2048, 00:11:09.483 "data_size": 63488 00:11:09.483 }, 00:11:09.483 { 00:11:09.483 "name": "BaseBdev2", 00:11:09.483 "uuid": "cdfa78c1-09cc-5c12-870d-9658df0fc118", 00:11:09.483 "is_configured": true, 00:11:09.483 "data_offset": 2048, 00:11:09.483 "data_size": 63488 
00:11:09.483 } 00:11:09.483 ] 00:11:09.483 }' 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.483 18:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.047 18:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.047 18:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.047 18:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.047 [2024-12-06 18:09:35.360166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.047 [2024-12-06 18:09:35.360375] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.047 [2024-12-06 18:09:35.363931] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.047 { 00:11:10.047 "results": [ 00:11:10.047 { 00:11:10.047 "job": "raid_bdev1", 00:11:10.047 "core_mask": "0x1", 00:11:10.047 "workload": "randrw", 00:11:10.047 "percentage": 50, 00:11:10.047 "status": "finished", 00:11:10.047 "queue_depth": 1, 00:11:10.047 "io_size": 131072, 00:11:10.047 "runtime": 1.410794, 00:11:10.047 "iops": 12950.154310267835, 00:11:10.047 "mibps": 1618.7692887834794, 00:11:10.047 "io_failed": 0, 00:11:10.047 "io_timeout": 0, 00:11:10.048 "avg_latency_us": 72.98809414340448, 00:11:10.048 "min_latency_us": 41.192727272727275, 00:11:10.048 "max_latency_us": 1839.4763636363637 00:11:10.048 } 00:11:10.048 ], 00:11:10.048 "core_count": 1 00:11:10.048 } 00:11:10.048 [2024-12-06 18:09:35.364124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.048 [2024-12-06 18:09:35.364303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.048 [2024-12-06 18:09:35.364329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007e80 name raid_bdev1, state offline 00:11:10.048 18:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.048 18:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63587 00:11:10.048 18:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63587 ']' 00:11:10.048 18:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63587 00:11:10.048 18:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:10.048 18:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.048 18:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63587 00:11:10.048 killing process with pid 63587 00:11:10.048 18:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.048 18:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.048 18:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63587' 00:11:10.048 18:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63587 00:11:10.048 [2024-12-06 18:09:35.402354] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.048 18:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63587 00:11:10.048 [2024-12-06 18:09:35.521459] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.422 18:09:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:11.422 18:09:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kk9IFhftj9 00:11:11.422 18:09:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:11.422 18:09:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:11.422 18:09:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:11.422 18:09:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:11.422 18:09:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:11.422 18:09:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:11.422 00:11:11.422 real 0m4.557s 00:11:11.422 user 0m5.748s 00:11:11.422 sys 0m0.571s 00:11:11.422 18:09:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.422 ************************************ 00:11:11.422 END TEST raid_read_error_test 00:11:11.422 ************************************ 00:11:11.422 18:09:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.422 18:09:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:11.422 18:09:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:11.422 18:09:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.422 18:09:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.422 ************************************ 00:11:11.422 START TEST raid_write_error_test 00:11:11.422 ************************************ 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2wCxOmRb6q 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63727 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 63727 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63727 ']' 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.422 18:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.422 [2024-12-06 18:09:36.756010] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:11:11.422 [2024-12-06 18:09:36.756218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63727 ] 00:11:11.680 [2024-12-06 18:09:36.941464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.680 [2024-12-06 18:09:37.097241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.938 [2024-12-06 18:09:37.337871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.938 [2024-12-06 18:09:37.337912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.504 BaseBdev1_malloc 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.504 true 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.504 [2024-12-06 18:09:37.834533] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:12.504 [2024-12-06 18:09:37.834616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.504 [2024-12-06 18:09:37.834652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:12.504 [2024-12-06 18:09:37.834672] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.504 [2024-12-06 18:09:37.837636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.504 [2024-12-06 18:09:37.837693] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:12.504 BaseBdev1 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.504 BaseBdev2_malloc 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:12.504 18:09:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.504 true 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.504 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.504 [2024-12-06 18:09:37.891067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:12.504 [2024-12-06 18:09:37.891145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.504 [2024-12-06 18:09:37.891177] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:12.504 [2024-12-06 18:09:37.891195] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.504 [2024-12-06 18:09:37.894073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.504 [2024-12-06 18:09:37.894125] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:12.504 BaseBdev2 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.505 [2024-12-06 18:09:37.899156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:12.505 [2024-12-06 18:09:37.901646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.505 [2024-12-06 18:09:37.902093] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:12.505 [2024-12-06 18:09:37.902126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:12.505 [2024-12-06 18:09:37.902474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:12.505 [2024-12-06 18:09:37.902731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:12.505 [2024-12-06 18:09:37.902749] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:12.505 [2024-12-06 18:09:37.903036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.505 "name": "raid_bdev1", 00:11:12.505 "uuid": "04c6633e-afdf-4967-8fb0-512fe3cd96f3", 00:11:12.505 "strip_size_kb": 0, 00:11:12.505 "state": "online", 00:11:12.505 "raid_level": "raid1", 00:11:12.505 "superblock": true, 00:11:12.505 "num_base_bdevs": 2, 00:11:12.505 "num_base_bdevs_discovered": 2, 00:11:12.505 "num_base_bdevs_operational": 2, 00:11:12.505 "base_bdevs_list": [ 00:11:12.505 { 00:11:12.505 "name": "BaseBdev1", 00:11:12.505 "uuid": "6ca86322-7463-5ac6-addf-1bbf20e9a665", 00:11:12.505 "is_configured": true, 00:11:12.505 "data_offset": 2048, 00:11:12.505 "data_size": 63488 00:11:12.505 }, 00:11:12.505 { 00:11:12.505 "name": "BaseBdev2", 00:11:12.505 "uuid": "043ef3a6-afb4-5a24-a801-d0c12065d846", 00:11:12.505 "is_configured": true, 00:11:12.505 "data_offset": 2048, 00:11:12.505 "data_size": 63488 00:11:12.505 } 00:11:12.505 ] 00:11:12.505 }' 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.505 18:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.070 18:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:13.070 18:09:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:13.070 [2024-12-06 18:09:38.500744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.004 [2024-12-06 18:09:39.397058] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:14.004 [2024-12-06 18:09:39.397289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:14.004 [2024-12-06 18:09:39.397544] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.004 "name": "raid_bdev1", 00:11:14.004 "uuid": "04c6633e-afdf-4967-8fb0-512fe3cd96f3", 00:11:14.004 "strip_size_kb": 0, 00:11:14.004 "state": "online", 00:11:14.004 "raid_level": "raid1", 00:11:14.004 "superblock": true, 00:11:14.004 "num_base_bdevs": 2, 00:11:14.004 "num_base_bdevs_discovered": 1, 00:11:14.004 "num_base_bdevs_operational": 1, 00:11:14.004 "base_bdevs_list": [ 00:11:14.004 { 00:11:14.004 "name": null, 00:11:14.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.004 "is_configured": false, 00:11:14.004 "data_offset": 0, 00:11:14.004 "data_size": 63488 00:11:14.004 }, 00:11:14.004 { 00:11:14.004 "name": 
"BaseBdev2", 00:11:14.004 "uuid": "043ef3a6-afb4-5a24-a801-d0c12065d846", 00:11:14.004 "is_configured": true, 00:11:14.004 "data_offset": 2048, 00:11:14.004 "data_size": 63488 00:11:14.004 } 00:11:14.004 ] 00:11:14.004 }' 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.004 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.571 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.571 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.571 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.571 [2024-12-06 18:09:39.932132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.571 [2024-12-06 18:09:39.932173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.571 [2024-12-06 18:09:39.935449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.571 [2024-12-06 18:09:39.935509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.571 [2024-12-06 18:09:39.935591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.571 [2024-12-06 18:09:39.935610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:14.571 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.571 { 00:11:14.571 "results": [ 00:11:14.571 { 00:11:14.571 "job": "raid_bdev1", 00:11:14.571 "core_mask": "0x1", 00:11:14.571 "workload": "randrw", 00:11:14.571 "percentage": 50, 00:11:14.571 "status": "finished", 00:11:14.571 "queue_depth": 1, 00:11:14.571 "io_size": 131072, 00:11:14.571 "runtime": 1.42901, 00:11:14.571 "iops": 14281.215666790295, 
00:11:14.571 "mibps": 1785.1519583487868, 00:11:14.571 "io_failed": 0, 00:11:14.571 "io_timeout": 0, 00:11:14.571 "avg_latency_us": 65.18734471330315, 00:11:14.571 "min_latency_us": 43.75272727272727, 00:11:14.571 "max_latency_us": 1817.1345454545456 00:11:14.571 } 00:11:14.571 ], 00:11:14.571 "core_count": 1 00:11:14.571 } 00:11:14.571 18:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63727 00:11:14.571 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63727 ']' 00:11:14.571 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63727 00:11:14.571 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:14.571 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.571 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63727 00:11:14.571 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.571 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.571 killing process with pid 63727 00:11:14.572 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63727' 00:11:14.572 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63727 00:11:14.572 [2024-12-06 18:09:39.970576] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.572 18:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63727 00:11:14.572 [2024-12-06 18:09:40.088754] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.945 18:09:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2wCxOmRb6q 00:11:15.945 18:09:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 
-- # grep raid_bdev1 00:11:15.945 18:09:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:15.945 18:09:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:15.945 18:09:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:15.945 18:09:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:15.945 18:09:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:15.945 18:09:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:15.945 00:11:15.945 real 0m4.538s 00:11:15.945 user 0m5.692s 00:11:15.945 sys 0m0.551s 00:11:15.945 18:09:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.945 ************************************ 00:11:15.945 END TEST raid_write_error_test 00:11:15.945 ************************************ 00:11:15.945 18:09:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.945 18:09:41 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:15.945 18:09:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:15.945 18:09:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:15.946 18:09:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:15.946 18:09:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.946 18:09:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.946 ************************************ 00:11:15.946 START TEST raid_state_function_test 00:11:15.946 ************************************ 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local 
raid_level=raid0 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:15.946 18:09:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63871 00:11:15.946 Process raid pid: 63871 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63871' 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63871 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63871 ']' 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.946 18:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.946 [2024-12-06 18:09:41.348469] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:11:15.946 [2024-12-06 18:09:41.348655] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.204 [2024-12-06 18:09:41.526243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.204 [2024-12-06 18:09:41.655656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.463 [2024-12-06 18:09:41.861507] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.463 [2024-12-06 18:09:41.861563] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.029 [2024-12-06 18:09:42.297945] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.029 [2024-12-06 18:09:42.298019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.029 [2024-12-06 18:09:42.298036] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:17.029 [2024-12-06 18:09:42.298053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:17.029 [2024-12-06 18:09:42.298063] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:17.029 [2024-12-06 18:09:42.298077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.029 "name": "Existed_Raid", 00:11:17.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.029 "strip_size_kb": 64, 00:11:17.029 "state": "configuring", 00:11:17.029 "raid_level": "raid0", 00:11:17.029 "superblock": false, 00:11:17.029 "num_base_bdevs": 3, 00:11:17.029 "num_base_bdevs_discovered": 0, 00:11:17.029 "num_base_bdevs_operational": 3, 00:11:17.029 "base_bdevs_list": [ 00:11:17.029 { 00:11:17.029 "name": "BaseBdev1", 00:11:17.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.029 "is_configured": false, 00:11:17.029 "data_offset": 0, 00:11:17.029 "data_size": 0 00:11:17.029 }, 00:11:17.029 { 00:11:17.029 "name": "BaseBdev2", 00:11:17.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.029 "is_configured": false, 00:11:17.029 "data_offset": 0, 00:11:17.029 "data_size": 0 00:11:17.029 }, 00:11:17.029 { 00:11:17.029 "name": "BaseBdev3", 00:11:17.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.029 "is_configured": false, 00:11:17.029 "data_offset": 0, 00:11:17.029 "data_size": 0 00:11:17.029 } 00:11:17.029 ] 00:11:17.029 }' 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.029 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.597 18:09:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.597 [2024-12-06 18:09:42.822061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:17.597 [2024-12-06 18:09:42.822257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.597 [2024-12-06 18:09:42.834056] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.597 [2024-12-06 18:09:42.834236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.597 [2024-12-06 18:09:42.834358] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:17.597 [2024-12-06 18:09:42.834421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:17.597 [2024-12-06 18:09:42.834543] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:17.597 [2024-12-06 18:09:42.834604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.597 [2024-12-06 18:09:42.878605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.597 BaseBdev1 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.597 [ 00:11:17.597 { 00:11:17.597 "name": "BaseBdev1", 00:11:17.597 "aliases": [ 00:11:17.597 "66a1998e-dab3-4a33-9792-7def944de968" 00:11:17.597 ], 00:11:17.597 
"product_name": "Malloc disk", 00:11:17.597 "block_size": 512, 00:11:17.597 "num_blocks": 65536, 00:11:17.597 "uuid": "66a1998e-dab3-4a33-9792-7def944de968", 00:11:17.597 "assigned_rate_limits": { 00:11:17.597 "rw_ios_per_sec": 0, 00:11:17.597 "rw_mbytes_per_sec": 0, 00:11:17.597 "r_mbytes_per_sec": 0, 00:11:17.597 "w_mbytes_per_sec": 0 00:11:17.597 }, 00:11:17.597 "claimed": true, 00:11:17.597 "claim_type": "exclusive_write", 00:11:17.597 "zoned": false, 00:11:17.597 "supported_io_types": { 00:11:17.597 "read": true, 00:11:17.597 "write": true, 00:11:17.597 "unmap": true, 00:11:17.597 "flush": true, 00:11:17.597 "reset": true, 00:11:17.597 "nvme_admin": false, 00:11:17.597 "nvme_io": false, 00:11:17.597 "nvme_io_md": false, 00:11:17.597 "write_zeroes": true, 00:11:17.597 "zcopy": true, 00:11:17.597 "get_zone_info": false, 00:11:17.597 "zone_management": false, 00:11:17.597 "zone_append": false, 00:11:17.597 "compare": false, 00:11:17.597 "compare_and_write": false, 00:11:17.597 "abort": true, 00:11:17.597 "seek_hole": false, 00:11:17.597 "seek_data": false, 00:11:17.597 "copy": true, 00:11:17.597 "nvme_iov_md": false 00:11:17.597 }, 00:11:17.597 "memory_domains": [ 00:11:17.597 { 00:11:17.597 "dma_device_id": "system", 00:11:17.597 "dma_device_type": 1 00:11:17.597 }, 00:11:17.597 { 00:11:17.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.597 "dma_device_type": 2 00:11:17.597 } 00:11:17.597 ], 00:11:17.597 "driver_specific": {} 00:11:17.597 } 00:11:17.597 ] 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.597 18:09:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.597 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.598 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.598 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.598 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.598 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.598 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.598 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.598 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.598 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.598 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.598 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.598 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.598 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.598 "name": "Existed_Raid", 00:11:17.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.598 "strip_size_kb": 64, 00:11:17.598 "state": "configuring", 00:11:17.598 "raid_level": "raid0", 00:11:17.598 "superblock": false, 00:11:17.598 "num_base_bdevs": 3, 00:11:17.598 "num_base_bdevs_discovered": 1, 00:11:17.598 "num_base_bdevs_operational": 3, 00:11:17.598 "base_bdevs_list": [ 00:11:17.598 { 00:11:17.598 "name": "BaseBdev1", 
00:11:17.598 "uuid": "66a1998e-dab3-4a33-9792-7def944de968", 00:11:17.598 "is_configured": true, 00:11:17.598 "data_offset": 0, 00:11:17.598 "data_size": 65536 00:11:17.598 }, 00:11:17.598 { 00:11:17.598 "name": "BaseBdev2", 00:11:17.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.598 "is_configured": false, 00:11:17.598 "data_offset": 0, 00:11:17.598 "data_size": 0 00:11:17.598 }, 00:11:17.598 { 00:11:17.598 "name": "BaseBdev3", 00:11:17.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.598 "is_configured": false, 00:11:17.598 "data_offset": 0, 00:11:17.598 "data_size": 0 00:11:17.598 } 00:11:17.598 ] 00:11:17.598 }' 00:11:17.598 18:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.598 18:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.164 [2024-12-06 18:09:43.434824] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.164 [2024-12-06 18:09:43.435746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.164 [2024-12-06 
18:09:43.446875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.164 [2024-12-06 18:09:43.449426] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.164 [2024-12-06 18:09:43.449620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.164 [2024-12-06 18:09:43.449748] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:18.164 [2024-12-06 18:09:43.449843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.164 "name": "Existed_Raid", 00:11:18.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.164 "strip_size_kb": 64, 00:11:18.164 "state": "configuring", 00:11:18.164 "raid_level": "raid0", 00:11:18.164 "superblock": false, 00:11:18.164 "num_base_bdevs": 3, 00:11:18.164 "num_base_bdevs_discovered": 1, 00:11:18.164 "num_base_bdevs_operational": 3, 00:11:18.164 "base_bdevs_list": [ 00:11:18.164 { 00:11:18.164 "name": "BaseBdev1", 00:11:18.164 "uuid": "66a1998e-dab3-4a33-9792-7def944de968", 00:11:18.164 "is_configured": true, 00:11:18.164 "data_offset": 0, 00:11:18.164 "data_size": 65536 00:11:18.164 }, 00:11:18.164 { 00:11:18.164 "name": "BaseBdev2", 00:11:18.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.164 "is_configured": false, 00:11:18.164 "data_offset": 0, 00:11:18.164 "data_size": 0 00:11:18.164 }, 00:11:18.164 { 00:11:18.164 "name": "BaseBdev3", 00:11:18.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.164 "is_configured": false, 00:11:18.164 "data_offset": 0, 00:11:18.164 "data_size": 0 00:11:18.164 } 00:11:18.164 ] 00:11:18.164 }' 00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:18.164 18:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.732 18:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:18.732 18:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.732 18:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.732 [2024-12-06 18:09:44.007653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.732 BaseBdev2 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:18.732 18:09:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.732 [ 00:11:18.732 { 00:11:18.732 "name": "BaseBdev2", 00:11:18.732 "aliases": [ 00:11:18.732 "041c9d68-fb39-4363-be74-7bee5a51c556" 00:11:18.732 ], 00:11:18.732 "product_name": "Malloc disk", 00:11:18.732 "block_size": 512, 00:11:18.732 "num_blocks": 65536, 00:11:18.732 "uuid": "041c9d68-fb39-4363-be74-7bee5a51c556", 00:11:18.732 "assigned_rate_limits": { 00:11:18.732 "rw_ios_per_sec": 0, 00:11:18.732 "rw_mbytes_per_sec": 0, 00:11:18.732 "r_mbytes_per_sec": 0, 00:11:18.732 "w_mbytes_per_sec": 0 00:11:18.732 }, 00:11:18.732 "claimed": true, 00:11:18.732 "claim_type": "exclusive_write", 00:11:18.732 "zoned": false, 00:11:18.732 "supported_io_types": { 00:11:18.732 "read": true, 00:11:18.732 "write": true, 00:11:18.732 "unmap": true, 00:11:18.732 "flush": true, 00:11:18.732 "reset": true, 00:11:18.732 "nvme_admin": false, 00:11:18.732 "nvme_io": false, 00:11:18.732 "nvme_io_md": false, 00:11:18.732 "write_zeroes": true, 00:11:18.732 "zcopy": true, 00:11:18.732 "get_zone_info": false, 00:11:18.732 "zone_management": false, 00:11:18.732 "zone_append": false, 00:11:18.732 "compare": false, 00:11:18.732 "compare_and_write": false, 00:11:18.732 "abort": true, 00:11:18.732 "seek_hole": false, 00:11:18.732 "seek_data": false, 00:11:18.732 "copy": true, 00:11:18.732 "nvme_iov_md": false 00:11:18.732 }, 00:11:18.732 "memory_domains": [ 00:11:18.732 { 00:11:18.732 "dma_device_id": "system", 00:11:18.732 "dma_device_type": 1 00:11:18.732 }, 00:11:18.732 { 00:11:18.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.732 "dma_device_type": 2 00:11:18.732 } 00:11:18.732 ], 00:11:18.732 "driver_specific": {} 00:11:18.732 } 00:11:18.732 ] 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.732 18:09:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.732 "name": "Existed_Raid", 00:11:18.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.732 "strip_size_kb": 64, 00:11:18.732 "state": "configuring", 00:11:18.732 "raid_level": "raid0", 00:11:18.732 "superblock": false, 00:11:18.732 "num_base_bdevs": 3, 00:11:18.732 "num_base_bdevs_discovered": 2, 00:11:18.732 "num_base_bdevs_operational": 3, 00:11:18.732 "base_bdevs_list": [ 00:11:18.732 { 00:11:18.732 "name": "BaseBdev1", 00:11:18.732 "uuid": "66a1998e-dab3-4a33-9792-7def944de968", 00:11:18.732 "is_configured": true, 00:11:18.732 "data_offset": 0, 00:11:18.732 "data_size": 65536 00:11:18.732 }, 00:11:18.732 { 00:11:18.732 "name": "BaseBdev2", 00:11:18.732 "uuid": "041c9d68-fb39-4363-be74-7bee5a51c556", 00:11:18.732 "is_configured": true, 00:11:18.732 "data_offset": 0, 00:11:18.732 "data_size": 65536 00:11:18.732 }, 00:11:18.732 { 00:11:18.732 "name": "BaseBdev3", 00:11:18.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.732 "is_configured": false, 00:11:18.732 "data_offset": 0, 00:11:18.732 "data_size": 0 00:11:18.732 } 00:11:18.732 ] 00:11:18.732 }' 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.732 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.991 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:18.991 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.991 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.249 [2024-12-06 18:09:44.563673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.249 [2024-12-06 18:09:44.563736] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:19.249 [2024-12-06 18:09:44.563759] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:19.249 [2024-12-06 18:09:44.564269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:19.249 [2024-12-06 18:09:44.564527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:19.249 [2024-12-06 18:09:44.564553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:19.249 [2024-12-06 18:09:44.565031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.250 BaseBdev3 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.250 
18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.250 [ 00:11:19.250 { 00:11:19.250 "name": "BaseBdev3", 00:11:19.250 "aliases": [ 00:11:19.250 "71ff0a4d-6113-4dce-8eeb-f69a4647c477" 00:11:19.250 ], 00:11:19.250 "product_name": "Malloc disk", 00:11:19.250 "block_size": 512, 00:11:19.250 "num_blocks": 65536, 00:11:19.250 "uuid": "71ff0a4d-6113-4dce-8eeb-f69a4647c477", 00:11:19.250 "assigned_rate_limits": { 00:11:19.250 "rw_ios_per_sec": 0, 00:11:19.250 "rw_mbytes_per_sec": 0, 00:11:19.250 "r_mbytes_per_sec": 0, 00:11:19.250 "w_mbytes_per_sec": 0 00:11:19.250 }, 00:11:19.250 "claimed": true, 00:11:19.250 "claim_type": "exclusive_write", 00:11:19.250 "zoned": false, 00:11:19.250 "supported_io_types": { 00:11:19.250 "read": true, 00:11:19.250 "write": true, 00:11:19.250 "unmap": true, 00:11:19.250 "flush": true, 00:11:19.250 "reset": true, 00:11:19.250 "nvme_admin": false, 00:11:19.250 "nvme_io": false, 00:11:19.250 "nvme_io_md": false, 00:11:19.250 "write_zeroes": true, 00:11:19.250 "zcopy": true, 00:11:19.250 "get_zone_info": false, 00:11:19.250 "zone_management": false, 00:11:19.250 "zone_append": false, 00:11:19.250 "compare": false, 00:11:19.250 "compare_and_write": false, 00:11:19.250 "abort": true, 00:11:19.250 "seek_hole": false, 00:11:19.250 "seek_data": false, 00:11:19.250 "copy": true, 00:11:19.250 "nvme_iov_md": false 00:11:19.250 }, 00:11:19.250 "memory_domains": [ 00:11:19.250 { 00:11:19.250 "dma_device_id": "system", 00:11:19.250 "dma_device_type": 1 00:11:19.250 }, 00:11:19.250 { 00:11:19.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.250 "dma_device_type": 2 00:11:19.250 } 00:11:19.250 ], 00:11:19.250 "driver_specific": {} 00:11:19.250 } 00:11:19.250 ] 
00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.250 "name": "Existed_Raid", 00:11:19.250 "uuid": "01d92ec2-3763-4b25-9f8d-ae8ffac3c210", 00:11:19.250 "strip_size_kb": 64, 00:11:19.250 "state": "online", 00:11:19.250 "raid_level": "raid0", 00:11:19.250 "superblock": false, 00:11:19.250 "num_base_bdevs": 3, 00:11:19.250 "num_base_bdevs_discovered": 3, 00:11:19.250 "num_base_bdevs_operational": 3, 00:11:19.250 "base_bdevs_list": [ 00:11:19.250 { 00:11:19.250 "name": "BaseBdev1", 00:11:19.250 "uuid": "66a1998e-dab3-4a33-9792-7def944de968", 00:11:19.250 "is_configured": true, 00:11:19.250 "data_offset": 0, 00:11:19.250 "data_size": 65536 00:11:19.250 }, 00:11:19.250 { 00:11:19.250 "name": "BaseBdev2", 00:11:19.250 "uuid": "041c9d68-fb39-4363-be74-7bee5a51c556", 00:11:19.250 "is_configured": true, 00:11:19.250 "data_offset": 0, 00:11:19.250 "data_size": 65536 00:11:19.250 }, 00:11:19.250 { 00:11:19.250 "name": "BaseBdev3", 00:11:19.250 "uuid": "71ff0a4d-6113-4dce-8eeb-f69a4647c477", 00:11:19.250 "is_configured": true, 00:11:19.250 "data_offset": 0, 00:11:19.250 "data_size": 65536 00:11:19.250 } 00:11:19.250 ] 00:11:19.250 }' 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.250 18:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.816 [2024-12-06 18:09:45.156289] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:19.816 "name": "Existed_Raid", 00:11:19.816 "aliases": [ 00:11:19.816 "01d92ec2-3763-4b25-9f8d-ae8ffac3c210" 00:11:19.816 ], 00:11:19.816 "product_name": "Raid Volume", 00:11:19.816 "block_size": 512, 00:11:19.816 "num_blocks": 196608, 00:11:19.816 "uuid": "01d92ec2-3763-4b25-9f8d-ae8ffac3c210", 00:11:19.816 "assigned_rate_limits": { 00:11:19.816 "rw_ios_per_sec": 0, 00:11:19.816 "rw_mbytes_per_sec": 0, 00:11:19.816 "r_mbytes_per_sec": 0, 00:11:19.816 "w_mbytes_per_sec": 0 00:11:19.816 }, 00:11:19.816 "claimed": false, 00:11:19.816 "zoned": false, 00:11:19.816 "supported_io_types": { 00:11:19.816 "read": true, 00:11:19.816 "write": true, 00:11:19.816 "unmap": true, 00:11:19.816 "flush": true, 00:11:19.816 "reset": true, 00:11:19.816 "nvme_admin": false, 00:11:19.816 "nvme_io": false, 00:11:19.816 "nvme_io_md": false, 00:11:19.816 "write_zeroes": true, 00:11:19.816 "zcopy": false, 00:11:19.816 "get_zone_info": false, 00:11:19.816 "zone_management": false, 00:11:19.816 
"zone_append": false, 00:11:19.816 "compare": false, 00:11:19.816 "compare_and_write": false, 00:11:19.816 "abort": false, 00:11:19.816 "seek_hole": false, 00:11:19.816 "seek_data": false, 00:11:19.816 "copy": false, 00:11:19.816 "nvme_iov_md": false 00:11:19.816 }, 00:11:19.816 "memory_domains": [ 00:11:19.816 { 00:11:19.816 "dma_device_id": "system", 00:11:19.816 "dma_device_type": 1 00:11:19.816 }, 00:11:19.816 { 00:11:19.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.816 "dma_device_type": 2 00:11:19.816 }, 00:11:19.816 { 00:11:19.816 "dma_device_id": "system", 00:11:19.816 "dma_device_type": 1 00:11:19.816 }, 00:11:19.816 { 00:11:19.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.816 "dma_device_type": 2 00:11:19.816 }, 00:11:19.816 { 00:11:19.816 "dma_device_id": "system", 00:11:19.816 "dma_device_type": 1 00:11:19.816 }, 00:11:19.816 { 00:11:19.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.816 "dma_device_type": 2 00:11:19.816 } 00:11:19.816 ], 00:11:19.816 "driver_specific": { 00:11:19.816 "raid": { 00:11:19.816 "uuid": "01d92ec2-3763-4b25-9f8d-ae8ffac3c210", 00:11:19.816 "strip_size_kb": 64, 00:11:19.816 "state": "online", 00:11:19.816 "raid_level": "raid0", 00:11:19.816 "superblock": false, 00:11:19.816 "num_base_bdevs": 3, 00:11:19.816 "num_base_bdevs_discovered": 3, 00:11:19.816 "num_base_bdevs_operational": 3, 00:11:19.816 "base_bdevs_list": [ 00:11:19.816 { 00:11:19.816 "name": "BaseBdev1", 00:11:19.816 "uuid": "66a1998e-dab3-4a33-9792-7def944de968", 00:11:19.816 "is_configured": true, 00:11:19.816 "data_offset": 0, 00:11:19.816 "data_size": 65536 00:11:19.816 }, 00:11:19.816 { 00:11:19.816 "name": "BaseBdev2", 00:11:19.816 "uuid": "041c9d68-fb39-4363-be74-7bee5a51c556", 00:11:19.816 "is_configured": true, 00:11:19.816 "data_offset": 0, 00:11:19.816 "data_size": 65536 00:11:19.816 }, 00:11:19.816 { 00:11:19.816 "name": "BaseBdev3", 00:11:19.816 "uuid": "71ff0a4d-6113-4dce-8eeb-f69a4647c477", 00:11:19.816 "is_configured": true, 
00:11:19.816 "data_offset": 0, 00:11:19.816 "data_size": 65536 00:11:19.816 } 00:11:19.816 ] 00:11:19.816 } 00:11:19.816 } 00:11:19.816 }' 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:19.816 BaseBdev2 00:11:19.816 BaseBdev3' 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.816 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.074 [2024-12-06 18:09:45.480039] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:20.074 [2024-12-06 18:09:45.480075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.074 [2024-12-06 18:09:45.480147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.074 18:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.331 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.331 "name": "Existed_Raid", 00:11:20.331 "uuid": "01d92ec2-3763-4b25-9f8d-ae8ffac3c210", 00:11:20.331 "strip_size_kb": 64, 00:11:20.331 "state": "offline", 00:11:20.331 "raid_level": "raid0", 00:11:20.331 "superblock": false, 00:11:20.331 "num_base_bdevs": 3, 00:11:20.331 "num_base_bdevs_discovered": 2, 00:11:20.331 "num_base_bdevs_operational": 2, 00:11:20.331 "base_bdevs_list": [ 00:11:20.331 { 00:11:20.331 "name": null, 00:11:20.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.331 "is_configured": false, 00:11:20.331 "data_offset": 0, 00:11:20.331 "data_size": 65536 00:11:20.331 }, 00:11:20.331 { 00:11:20.331 "name": "BaseBdev2", 00:11:20.331 "uuid": "041c9d68-fb39-4363-be74-7bee5a51c556", 00:11:20.331 "is_configured": true, 00:11:20.331 "data_offset": 0, 00:11:20.331 "data_size": 65536 00:11:20.331 }, 00:11:20.331 { 00:11:20.331 "name": "BaseBdev3", 00:11:20.331 "uuid": "71ff0a4d-6113-4dce-8eeb-f69a4647c477", 00:11:20.331 "is_configured": true, 00:11:20.331 "data_offset": 0, 00:11:20.331 "data_size": 65536 00:11:20.331 } 00:11:20.331 ] 00:11:20.331 }' 00:11:20.331 18:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.331 18:09:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.897 [2024-12-06 18:09:46.187741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.897 18:09:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.897 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.897 [2024-12-06 18:09:46.333168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:20.897 [2024-12-06 18:09:46.333368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.156 BaseBdev2 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.156 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.156 [ 00:11:21.156 { 00:11:21.156 "name": "BaseBdev2", 00:11:21.156 "aliases": [ 00:11:21.156 "4d4b5629-d915-46c0-99b7-05244912653f" 00:11:21.156 ], 00:11:21.156 "product_name": "Malloc disk", 00:11:21.156 "block_size": 512, 00:11:21.156 "num_blocks": 65536, 00:11:21.156 "uuid": "4d4b5629-d915-46c0-99b7-05244912653f", 00:11:21.156 "assigned_rate_limits": { 00:11:21.156 "rw_ios_per_sec": 0, 00:11:21.156 "rw_mbytes_per_sec": 0, 00:11:21.156 "r_mbytes_per_sec": 0, 00:11:21.156 "w_mbytes_per_sec": 0 00:11:21.156 }, 00:11:21.156 "claimed": false, 00:11:21.156 "zoned": false, 00:11:21.156 "supported_io_types": { 00:11:21.156 "read": true, 00:11:21.156 "write": true, 00:11:21.156 "unmap": true, 00:11:21.156 "flush": true, 00:11:21.156 "reset": true, 00:11:21.156 "nvme_admin": false, 00:11:21.156 "nvme_io": false, 00:11:21.156 "nvme_io_md": false, 00:11:21.156 "write_zeroes": true, 00:11:21.156 "zcopy": true, 00:11:21.156 "get_zone_info": false, 00:11:21.156 "zone_management": false, 00:11:21.156 "zone_append": false, 00:11:21.156 "compare": false, 00:11:21.156 "compare_and_write": false, 00:11:21.156 "abort": true, 00:11:21.156 "seek_hole": false, 00:11:21.156 "seek_data": false, 00:11:21.156 "copy": true, 00:11:21.156 "nvme_iov_md": false 00:11:21.157 }, 00:11:21.157 "memory_domains": [ 00:11:21.157 { 00:11:21.157 "dma_device_id": "system", 00:11:21.157 "dma_device_type": 1 00:11:21.157 }, 
00:11:21.157 { 00:11:21.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.157 "dma_device_type": 2 00:11:21.157 } 00:11:21.157 ], 00:11:21.157 "driver_specific": {} 00:11:21.157 } 00:11:21.157 ] 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.157 BaseBdev3 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.157 [ 00:11:21.157 { 00:11:21.157 "name": "BaseBdev3", 00:11:21.157 "aliases": [ 00:11:21.157 "9afce3bf-e13d-4cc0-8eec-19fa4b3a3792" 00:11:21.157 ], 00:11:21.157 "product_name": "Malloc disk", 00:11:21.157 "block_size": 512, 00:11:21.157 "num_blocks": 65536, 00:11:21.157 "uuid": "9afce3bf-e13d-4cc0-8eec-19fa4b3a3792", 00:11:21.157 "assigned_rate_limits": { 00:11:21.157 "rw_ios_per_sec": 0, 00:11:21.157 "rw_mbytes_per_sec": 0, 00:11:21.157 "r_mbytes_per_sec": 0, 00:11:21.157 "w_mbytes_per_sec": 0 00:11:21.157 }, 00:11:21.157 "claimed": false, 00:11:21.157 "zoned": false, 00:11:21.157 "supported_io_types": { 00:11:21.157 "read": true, 00:11:21.157 "write": true, 00:11:21.157 "unmap": true, 00:11:21.157 "flush": true, 00:11:21.157 "reset": true, 00:11:21.157 "nvme_admin": false, 00:11:21.157 "nvme_io": false, 00:11:21.157 "nvme_io_md": false, 00:11:21.157 "write_zeroes": true, 00:11:21.157 "zcopy": true, 00:11:21.157 "get_zone_info": false, 00:11:21.157 "zone_management": false, 00:11:21.157 "zone_append": false, 00:11:21.157 "compare": false, 00:11:21.157 "compare_and_write": false, 00:11:21.157 "abort": true, 00:11:21.157 "seek_hole": false, 00:11:21.157 "seek_data": false, 00:11:21.157 "copy": true, 00:11:21.157 "nvme_iov_md": false 00:11:21.157 }, 00:11:21.157 "memory_domains": [ 00:11:21.157 { 00:11:21.157 "dma_device_id": "system", 00:11:21.157 "dma_device_type": 1 00:11:21.157 }, 00:11:21.157 { 
00:11:21.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.157 "dma_device_type": 2 00:11:21.157 } 00:11:21.157 ], 00:11:21.157 "driver_specific": {} 00:11:21.157 } 00:11:21.157 ] 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.157 [2024-12-06 18:09:46.629952] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.157 [2024-12-06 18:09:46.630148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.157 [2024-12-06 18:09:46.630288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.157 [2024-12-06 18:09:46.632760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.157 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.415 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.415 "name": "Existed_Raid", 00:11:21.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.415 "strip_size_kb": 64, 00:11:21.415 "state": "configuring", 00:11:21.415 "raid_level": "raid0", 00:11:21.415 "superblock": false, 00:11:21.415 "num_base_bdevs": 3, 00:11:21.415 "num_base_bdevs_discovered": 2, 00:11:21.415 "num_base_bdevs_operational": 3, 00:11:21.415 "base_bdevs_list": [ 00:11:21.415 { 00:11:21.415 "name": "BaseBdev1", 00:11:21.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.415 
"is_configured": false, 00:11:21.415 "data_offset": 0, 00:11:21.415 "data_size": 0 00:11:21.415 }, 00:11:21.415 { 00:11:21.415 "name": "BaseBdev2", 00:11:21.415 "uuid": "4d4b5629-d915-46c0-99b7-05244912653f", 00:11:21.415 "is_configured": true, 00:11:21.415 "data_offset": 0, 00:11:21.415 "data_size": 65536 00:11:21.415 }, 00:11:21.415 { 00:11:21.415 "name": "BaseBdev3", 00:11:21.415 "uuid": "9afce3bf-e13d-4cc0-8eec-19fa4b3a3792", 00:11:21.415 "is_configured": true, 00:11:21.415 "data_offset": 0, 00:11:21.415 "data_size": 65536 00:11:21.415 } 00:11:21.415 ] 00:11:21.415 }' 00:11:21.415 18:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.415 18:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.678 [2024-12-06 18:09:47.130125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.678 18:09:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.678 "name": "Existed_Raid", 00:11:21.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.678 "strip_size_kb": 64, 00:11:21.678 "state": "configuring", 00:11:21.678 "raid_level": "raid0", 00:11:21.678 "superblock": false, 00:11:21.678 "num_base_bdevs": 3, 00:11:21.678 "num_base_bdevs_discovered": 1, 00:11:21.678 "num_base_bdevs_operational": 3, 00:11:21.678 "base_bdevs_list": [ 00:11:21.678 { 00:11:21.678 "name": "BaseBdev1", 00:11:21.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.678 "is_configured": false, 00:11:21.678 "data_offset": 0, 00:11:21.678 "data_size": 0 00:11:21.678 }, 00:11:21.678 { 00:11:21.678 "name": null, 00:11:21.678 "uuid": "4d4b5629-d915-46c0-99b7-05244912653f", 00:11:21.678 "is_configured": false, 00:11:21.678 "data_offset": 0, 
00:11:21.678 "data_size": 65536 00:11:21.678 }, 00:11:21.678 { 00:11:21.678 "name": "BaseBdev3", 00:11:21.678 "uuid": "9afce3bf-e13d-4cc0-8eec-19fa4b3a3792", 00:11:21.678 "is_configured": true, 00:11:21.678 "data_offset": 0, 00:11:21.678 "data_size": 65536 00:11:21.678 } 00:11:21.678 ] 00:11:21.678 }' 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.678 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.255 [2024-12-06 18:09:47.731898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.255 BaseBdev1 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.255 [ 00:11:22.255 { 00:11:22.255 "name": "BaseBdev1", 00:11:22.255 "aliases": [ 00:11:22.255 "61b25b05-cb10-49c7-9d99-534bbb040b63" 00:11:22.255 ], 00:11:22.255 "product_name": "Malloc disk", 00:11:22.255 "block_size": 512, 00:11:22.255 "num_blocks": 65536, 00:11:22.255 "uuid": "61b25b05-cb10-49c7-9d99-534bbb040b63", 00:11:22.255 "assigned_rate_limits": { 00:11:22.255 "rw_ios_per_sec": 0, 00:11:22.255 "rw_mbytes_per_sec": 0, 00:11:22.255 "r_mbytes_per_sec": 0, 00:11:22.255 "w_mbytes_per_sec": 0 00:11:22.255 }, 00:11:22.255 "claimed": true, 00:11:22.255 "claim_type": "exclusive_write", 00:11:22.255 "zoned": false, 00:11:22.255 "supported_io_types": { 00:11:22.255 "read": true, 00:11:22.255 "write": true, 00:11:22.255 "unmap": 
true, 00:11:22.255 "flush": true, 00:11:22.255 "reset": true, 00:11:22.255 "nvme_admin": false, 00:11:22.255 "nvme_io": false, 00:11:22.255 "nvme_io_md": false, 00:11:22.255 "write_zeroes": true, 00:11:22.255 "zcopy": true, 00:11:22.255 "get_zone_info": false, 00:11:22.255 "zone_management": false, 00:11:22.255 "zone_append": false, 00:11:22.255 "compare": false, 00:11:22.255 "compare_and_write": false, 00:11:22.255 "abort": true, 00:11:22.255 "seek_hole": false, 00:11:22.255 "seek_data": false, 00:11:22.255 "copy": true, 00:11:22.255 "nvme_iov_md": false 00:11:22.255 }, 00:11:22.255 "memory_domains": [ 00:11:22.255 { 00:11:22.255 "dma_device_id": "system", 00:11:22.255 "dma_device_type": 1 00:11:22.255 }, 00:11:22.255 { 00:11:22.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.255 "dma_device_type": 2 00:11:22.255 } 00:11:22.255 ], 00:11:22.255 "driver_specific": {} 00:11:22.255 } 00:11:22.255 ] 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.255 18:09:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.255 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.256 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.256 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.256 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.256 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.256 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.514 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.514 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.514 "name": "Existed_Raid", 00:11:22.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.514 "strip_size_kb": 64, 00:11:22.514 "state": "configuring", 00:11:22.514 "raid_level": "raid0", 00:11:22.514 "superblock": false, 00:11:22.514 "num_base_bdevs": 3, 00:11:22.514 "num_base_bdevs_discovered": 2, 00:11:22.514 "num_base_bdevs_operational": 3, 00:11:22.514 "base_bdevs_list": [ 00:11:22.514 { 00:11:22.514 "name": "BaseBdev1", 00:11:22.514 "uuid": "61b25b05-cb10-49c7-9d99-534bbb040b63", 00:11:22.514 "is_configured": true, 00:11:22.514 "data_offset": 0, 00:11:22.514 "data_size": 65536 00:11:22.514 }, 00:11:22.514 { 00:11:22.514 "name": null, 00:11:22.514 "uuid": "4d4b5629-d915-46c0-99b7-05244912653f", 00:11:22.514 "is_configured": false, 00:11:22.514 "data_offset": 0, 00:11:22.514 "data_size": 65536 00:11:22.514 }, 00:11:22.514 { 00:11:22.514 "name": "BaseBdev3", 00:11:22.514 "uuid": "9afce3bf-e13d-4cc0-8eec-19fa4b3a3792", 00:11:22.514 "is_configured": true, 00:11:22.514 "data_offset": 0, 
00:11:22.514 "data_size": 65536 00:11:22.514 } 00:11:22.514 ] 00:11:22.514 }' 00:11:22.514 18:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.514 18:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.773 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:22.773 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.773 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.773 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.773 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.031 [2024-12-06 18:09:48.324112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.031 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.031 "name": "Existed_Raid", 00:11:23.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.031 "strip_size_kb": 64, 00:11:23.031 "state": "configuring", 00:11:23.031 "raid_level": "raid0", 00:11:23.031 "superblock": false, 00:11:23.031 "num_base_bdevs": 3, 00:11:23.031 "num_base_bdevs_discovered": 1, 00:11:23.031 "num_base_bdevs_operational": 3, 00:11:23.031 "base_bdevs_list": [ 00:11:23.031 { 00:11:23.031 "name": "BaseBdev1", 00:11:23.031 "uuid": "61b25b05-cb10-49c7-9d99-534bbb040b63", 00:11:23.031 "is_configured": true, 00:11:23.031 "data_offset": 0, 00:11:23.031 "data_size": 65536 00:11:23.031 }, 00:11:23.031 { 
00:11:23.031 "name": null, 00:11:23.031 "uuid": "4d4b5629-d915-46c0-99b7-05244912653f", 00:11:23.031 "is_configured": false, 00:11:23.031 "data_offset": 0, 00:11:23.031 "data_size": 65536 00:11:23.031 }, 00:11:23.031 { 00:11:23.031 "name": null, 00:11:23.031 "uuid": "9afce3bf-e13d-4cc0-8eec-19fa4b3a3792", 00:11:23.031 "is_configured": false, 00:11:23.031 "data_offset": 0, 00:11:23.031 "data_size": 65536 00:11:23.031 } 00:11:23.031 ] 00:11:23.031 }' 00:11:23.032 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.032 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.596 [2024-12-06 18:09:48.900317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.596 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.596 "name": "Existed_Raid", 00:11:23.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.596 "strip_size_kb": 64, 00:11:23.596 "state": "configuring", 00:11:23.596 "raid_level": "raid0", 00:11:23.596 
"superblock": false, 00:11:23.596 "num_base_bdevs": 3, 00:11:23.596 "num_base_bdevs_discovered": 2, 00:11:23.596 "num_base_bdevs_operational": 3, 00:11:23.596 "base_bdevs_list": [ 00:11:23.596 { 00:11:23.596 "name": "BaseBdev1", 00:11:23.596 "uuid": "61b25b05-cb10-49c7-9d99-534bbb040b63", 00:11:23.596 "is_configured": true, 00:11:23.596 "data_offset": 0, 00:11:23.596 "data_size": 65536 00:11:23.596 }, 00:11:23.596 { 00:11:23.596 "name": null, 00:11:23.596 "uuid": "4d4b5629-d915-46c0-99b7-05244912653f", 00:11:23.596 "is_configured": false, 00:11:23.597 "data_offset": 0, 00:11:23.597 "data_size": 65536 00:11:23.597 }, 00:11:23.597 { 00:11:23.597 "name": "BaseBdev3", 00:11:23.597 "uuid": "9afce3bf-e13d-4cc0-8eec-19fa4b3a3792", 00:11:23.597 "is_configured": true, 00:11:23.597 "data_offset": 0, 00:11:23.597 "data_size": 65536 00:11:23.597 } 00:11:23.597 ] 00:11:23.597 }' 00:11:23.597 18:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.597 18:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.163 [2024-12-06 18:09:49.508485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.163 "name": "Existed_Raid", 00:11:24.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.163 "strip_size_kb": 64, 00:11:24.163 "state": "configuring", 00:11:24.163 "raid_level": "raid0", 00:11:24.163 "superblock": false, 00:11:24.163 "num_base_bdevs": 3, 00:11:24.163 "num_base_bdevs_discovered": 1, 00:11:24.163 "num_base_bdevs_operational": 3, 00:11:24.163 "base_bdevs_list": [ 00:11:24.163 { 00:11:24.163 "name": null, 00:11:24.163 "uuid": "61b25b05-cb10-49c7-9d99-534bbb040b63", 00:11:24.163 "is_configured": false, 00:11:24.163 "data_offset": 0, 00:11:24.163 "data_size": 65536 00:11:24.163 }, 00:11:24.163 { 00:11:24.163 "name": null, 00:11:24.163 "uuid": "4d4b5629-d915-46c0-99b7-05244912653f", 00:11:24.163 "is_configured": false, 00:11:24.163 "data_offset": 0, 00:11:24.163 "data_size": 65536 00:11:24.163 }, 00:11:24.163 { 00:11:24.163 "name": "BaseBdev3", 00:11:24.163 "uuid": "9afce3bf-e13d-4cc0-8eec-19fa4b3a3792", 00:11:24.163 "is_configured": true, 00:11:24.163 "data_offset": 0, 00:11:24.163 "data_size": 65536 00:11:24.163 } 00:11:24.163 ] 00:11:24.163 }' 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.163 18:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.729 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.729 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.729 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:24.729 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.729 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:24.729 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:24.729 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:24.729 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.729 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.729 [2024-12-06 18:09:50.161301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.729 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.729 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:24.729 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.729 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.730 "name": "Existed_Raid", 00:11:24.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.730 "strip_size_kb": 64, 00:11:24.730 "state": "configuring", 00:11:24.730 "raid_level": "raid0", 00:11:24.730 "superblock": false, 00:11:24.730 "num_base_bdevs": 3, 00:11:24.730 "num_base_bdevs_discovered": 2, 00:11:24.730 "num_base_bdevs_operational": 3, 00:11:24.730 "base_bdevs_list": [ 00:11:24.730 { 00:11:24.730 "name": null, 00:11:24.730 "uuid": "61b25b05-cb10-49c7-9d99-534bbb040b63", 00:11:24.730 "is_configured": false, 00:11:24.730 "data_offset": 0, 00:11:24.730 "data_size": 65536 00:11:24.730 }, 00:11:24.730 { 00:11:24.730 "name": "BaseBdev2", 00:11:24.730 "uuid": "4d4b5629-d915-46c0-99b7-05244912653f", 00:11:24.730 "is_configured": true, 00:11:24.730 "data_offset": 0, 00:11:24.730 "data_size": 65536 00:11:24.730 }, 00:11:24.730 { 00:11:24.730 "name": "BaseBdev3", 00:11:24.730 "uuid": "9afce3bf-e13d-4cc0-8eec-19fa4b3a3792", 00:11:24.730 "is_configured": true, 00:11:24.730 "data_offset": 0, 00:11:24.730 "data_size": 65536 00:11:24.730 } 00:11:24.730 ] 00:11:24.730 }' 00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.730 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.297 18:09:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 61b25b05-cb10-49c7-9d99-534bbb040b63 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.297 [2024-12-06 18:09:50.799743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:25.297 [2024-12-06 18:09:50.799821] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:25.297 [2024-12-06 18:09:50.799839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:25.297 [2024-12-06 18:09:50.800163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:11:25.297 [2024-12-06 18:09:50.800364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:25.297 [2024-12-06 18:09:50.800381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:25.297 [2024-12-06 18:09:50.800679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.297 NewBaseBdev 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.297 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.298 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.298 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.298 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:25.298 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.298 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:25.558 [ 00:11:25.558 { 00:11:25.558 "name": "NewBaseBdev", 00:11:25.558 "aliases": [ 00:11:25.558 "61b25b05-cb10-49c7-9d99-534bbb040b63" 00:11:25.558 ], 00:11:25.558 "product_name": "Malloc disk", 00:11:25.558 "block_size": 512, 00:11:25.558 "num_blocks": 65536, 00:11:25.558 "uuid": "61b25b05-cb10-49c7-9d99-534bbb040b63", 00:11:25.558 "assigned_rate_limits": { 00:11:25.558 "rw_ios_per_sec": 0, 00:11:25.558 "rw_mbytes_per_sec": 0, 00:11:25.558 "r_mbytes_per_sec": 0, 00:11:25.558 "w_mbytes_per_sec": 0 00:11:25.558 }, 00:11:25.558 "claimed": true, 00:11:25.558 "claim_type": "exclusive_write", 00:11:25.558 "zoned": false, 00:11:25.558 "supported_io_types": { 00:11:25.558 "read": true, 00:11:25.558 "write": true, 00:11:25.558 "unmap": true, 00:11:25.558 "flush": true, 00:11:25.558 "reset": true, 00:11:25.558 "nvme_admin": false, 00:11:25.558 "nvme_io": false, 00:11:25.558 "nvme_io_md": false, 00:11:25.558 "write_zeroes": true, 00:11:25.558 "zcopy": true, 00:11:25.558 "get_zone_info": false, 00:11:25.558 "zone_management": false, 00:11:25.558 "zone_append": false, 00:11:25.558 "compare": false, 00:11:25.558 "compare_and_write": false, 00:11:25.558 "abort": true, 00:11:25.558 "seek_hole": false, 00:11:25.558 "seek_data": false, 00:11:25.558 "copy": true, 00:11:25.558 "nvme_iov_md": false 00:11:25.558 }, 00:11:25.558 "memory_domains": [ 00:11:25.558 { 00:11:25.558 "dma_device_id": "system", 00:11:25.558 "dma_device_type": 1 00:11:25.558 }, 00:11:25.558 { 00:11:25.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.558 "dma_device_type": 2 00:11:25.558 } 00:11:25.558 ], 00:11:25.558 "driver_specific": {} 00:11:25.558 } 00:11:25.558 ] 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.558 "name": "Existed_Raid", 00:11:25.558 "uuid": "c37a950d-c62b-46ba-8e14-0478309348e8", 00:11:25.558 "strip_size_kb": 64, 00:11:25.558 "state": "online", 00:11:25.558 "raid_level": "raid0", 00:11:25.558 "superblock": false, 00:11:25.558 "num_base_bdevs": 3, 00:11:25.558 
"num_base_bdevs_discovered": 3, 00:11:25.558 "num_base_bdevs_operational": 3, 00:11:25.558 "base_bdevs_list": [ 00:11:25.558 { 00:11:25.558 "name": "NewBaseBdev", 00:11:25.558 "uuid": "61b25b05-cb10-49c7-9d99-534bbb040b63", 00:11:25.558 "is_configured": true, 00:11:25.558 "data_offset": 0, 00:11:25.558 "data_size": 65536 00:11:25.558 }, 00:11:25.558 { 00:11:25.558 "name": "BaseBdev2", 00:11:25.558 "uuid": "4d4b5629-d915-46c0-99b7-05244912653f", 00:11:25.558 "is_configured": true, 00:11:25.558 "data_offset": 0, 00:11:25.558 "data_size": 65536 00:11:25.558 }, 00:11:25.558 { 00:11:25.558 "name": "BaseBdev3", 00:11:25.558 "uuid": "9afce3bf-e13d-4cc0-8eec-19fa4b3a3792", 00:11:25.558 "is_configured": true, 00:11:25.558 "data_offset": 0, 00:11:25.558 "data_size": 65536 00:11:25.558 } 00:11:25.558 ] 00:11:25.558 }' 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.558 18:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.127 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:26.127 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:26.127 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:26.127 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.127 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.127 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.127 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:26.127 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.127 18:09:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.127 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.127 [2024-12-06 18:09:51.348322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.127 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.127 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.127 "name": "Existed_Raid", 00:11:26.127 "aliases": [ 00:11:26.127 "c37a950d-c62b-46ba-8e14-0478309348e8" 00:11:26.127 ], 00:11:26.127 "product_name": "Raid Volume", 00:11:26.127 "block_size": 512, 00:11:26.127 "num_blocks": 196608, 00:11:26.127 "uuid": "c37a950d-c62b-46ba-8e14-0478309348e8", 00:11:26.127 "assigned_rate_limits": { 00:11:26.127 "rw_ios_per_sec": 0, 00:11:26.127 "rw_mbytes_per_sec": 0, 00:11:26.127 "r_mbytes_per_sec": 0, 00:11:26.127 "w_mbytes_per_sec": 0 00:11:26.127 }, 00:11:26.127 "claimed": false, 00:11:26.127 "zoned": false, 00:11:26.127 "supported_io_types": { 00:11:26.127 "read": true, 00:11:26.127 "write": true, 00:11:26.127 "unmap": true, 00:11:26.127 "flush": true, 00:11:26.127 "reset": true, 00:11:26.127 "nvme_admin": false, 00:11:26.127 "nvme_io": false, 00:11:26.127 "nvme_io_md": false, 00:11:26.127 "write_zeroes": true, 00:11:26.127 "zcopy": false, 00:11:26.127 "get_zone_info": false, 00:11:26.127 "zone_management": false, 00:11:26.127 "zone_append": false, 00:11:26.127 "compare": false, 00:11:26.127 "compare_and_write": false, 00:11:26.127 "abort": false, 00:11:26.127 "seek_hole": false, 00:11:26.127 "seek_data": false, 00:11:26.127 "copy": false, 00:11:26.127 "nvme_iov_md": false 00:11:26.127 }, 00:11:26.127 "memory_domains": [ 00:11:26.127 { 00:11:26.127 "dma_device_id": "system", 00:11:26.127 "dma_device_type": 1 00:11:26.127 }, 00:11:26.127 { 00:11:26.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.127 "dma_device_type": 2 00:11:26.127 }, 
00:11:26.127 { 00:11:26.127 "dma_device_id": "system", 00:11:26.127 "dma_device_type": 1 00:11:26.127 }, 00:11:26.127 { 00:11:26.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.127 "dma_device_type": 2 00:11:26.127 }, 00:11:26.127 { 00:11:26.127 "dma_device_id": "system", 00:11:26.127 "dma_device_type": 1 00:11:26.127 }, 00:11:26.127 { 00:11:26.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.127 "dma_device_type": 2 00:11:26.127 } 00:11:26.127 ], 00:11:26.127 "driver_specific": { 00:11:26.127 "raid": { 00:11:26.127 "uuid": "c37a950d-c62b-46ba-8e14-0478309348e8", 00:11:26.127 "strip_size_kb": 64, 00:11:26.127 "state": "online", 00:11:26.127 "raid_level": "raid0", 00:11:26.127 "superblock": false, 00:11:26.127 "num_base_bdevs": 3, 00:11:26.127 "num_base_bdevs_discovered": 3, 00:11:26.127 "num_base_bdevs_operational": 3, 00:11:26.127 "base_bdevs_list": [ 00:11:26.127 { 00:11:26.127 "name": "NewBaseBdev", 00:11:26.127 "uuid": "61b25b05-cb10-49c7-9d99-534bbb040b63", 00:11:26.127 "is_configured": true, 00:11:26.127 "data_offset": 0, 00:11:26.127 "data_size": 65536 00:11:26.127 }, 00:11:26.127 { 00:11:26.127 "name": "BaseBdev2", 00:11:26.127 "uuid": "4d4b5629-d915-46c0-99b7-05244912653f", 00:11:26.127 "is_configured": true, 00:11:26.127 "data_offset": 0, 00:11:26.127 "data_size": 65536 00:11:26.127 }, 00:11:26.127 { 00:11:26.127 "name": "BaseBdev3", 00:11:26.127 "uuid": "9afce3bf-e13d-4cc0-8eec-19fa4b3a3792", 00:11:26.127 "is_configured": true, 00:11:26.128 "data_offset": 0, 00:11:26.128 "data_size": 65536 00:11:26.128 } 00:11:26.128 ] 00:11:26.128 } 00:11:26.128 } 00:11:26.128 }' 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:26.128 BaseBdev2 00:11:26.128 BaseBdev3' 00:11:26.128 18:09:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.128 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.386 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.386 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.386 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:26.386 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.386 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.387 [2024-12-06 18:09:51.668024] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.387 [2024-12-06 18:09:51.668059] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.387 [2024-12-06 18:09:51.668161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.387 [2024-12-06 18:09:51.668234] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.387 [2024-12-06 18:09:51.668255] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:26.387 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.387 18:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63871 00:11:26.387 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63871 ']' 00:11:26.387 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63871 00:11:26.387 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:26.387 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.387 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63871 00:11:26.387 killing process with pid 63871 00:11:26.387 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.387 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.387 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63871' 00:11:26.387 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63871 00:11:26.387 [2024-12-06 18:09:51.709118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.387 18:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63871 00:11:26.646 [2024-12-06 18:09:51.976858] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:27.583 18:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:27.583 00:11:27.583 real 0m11.806s 00:11:27.583 user 0m19.660s 00:11:27.583 sys 0m1.523s 00:11:27.583 18:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:11:27.583 ************************************ 00:11:27.583 END TEST raid_state_function_test 00:11:27.583 ************************************ 00:11:27.583 18:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.583 18:09:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:11:27.583 18:09:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:27.583 18:09:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.583 18:09:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:27.842 ************************************ 00:11:27.842 START TEST raid_state_function_test_sb 00:11:27.842 ************************************ 00:11:27.842 18:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:11:27.842 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:27.842 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:27.842 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:27.842 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:27.842 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:27.843 Process raid pid: 64505 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64505 
00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64505' 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64505 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64505 ']' 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.843 18:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.843 [2024-12-06 18:09:53.228659] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:11:27.843 [2024-12-06 18:09:53.229108] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.102 [2024-12-06 18:09:53.422497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.102 [2024-12-06 18:09:53.585177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.360 [2024-12-06 18:09:53.813582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.360 [2024-12-06 18:09:53.813631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.928 [2024-12-06 18:09:54.255380] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.928 [2024-12-06 18:09:54.255460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.928 [2024-12-06 18:09:54.255482] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.928 [2024-12-06 18:09:54.255503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.928 [2024-12-06 18:09:54.255516] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:11:28.928 [2024-12-06 18:09:54.255533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.928 "name": "Existed_Raid", 00:11:28.928 "uuid": "078392f8-01b5-405e-8cb8-7edabc6cdd36", 00:11:28.928 "strip_size_kb": 64, 00:11:28.928 "state": "configuring", 00:11:28.928 "raid_level": "raid0", 00:11:28.928 "superblock": true, 00:11:28.928 "num_base_bdevs": 3, 00:11:28.928 "num_base_bdevs_discovered": 0, 00:11:28.928 "num_base_bdevs_operational": 3, 00:11:28.928 "base_bdevs_list": [ 00:11:28.928 { 00:11:28.928 "name": "BaseBdev1", 00:11:28.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.928 "is_configured": false, 00:11:28.928 "data_offset": 0, 00:11:28.928 "data_size": 0 00:11:28.928 }, 00:11:28.928 { 00:11:28.928 "name": "BaseBdev2", 00:11:28.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.928 "is_configured": false, 00:11:28.928 "data_offset": 0, 00:11:28.928 "data_size": 0 00:11:28.928 }, 00:11:28.928 { 00:11:28.928 "name": "BaseBdev3", 00:11:28.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.928 "is_configured": false, 00:11:28.928 "data_offset": 0, 00:11:28.928 "data_size": 0 00:11:28.928 } 00:11:28.928 ] 00:11:28.928 }' 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.928 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.497 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:29.497 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.497 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.497 [2024-12-06 18:09:54.771419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.497 [2024-12-06 18:09:54.771601] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:29.497 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.497 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:29.497 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.497 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.497 [2024-12-06 18:09:54.779403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:29.497 [2024-12-06 18:09:54.779475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:29.497 [2024-12-06 18:09:54.779491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:29.497 [2024-12-06 18:09:54.779507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:29.497 [2024-12-06 18:09:54.779517] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:29.497 [2024-12-06 18:09:54.779531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:29.497 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.497 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:29.497 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.497 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.497 [2024-12-06 18:09:54.824340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.497 BaseBdev1 
00:11:29.497 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.498 [ 00:11:29.498 { 00:11:29.498 "name": "BaseBdev1", 00:11:29.498 "aliases": [ 00:11:29.498 "8f994985-5e99-4cbd-a541-e60337636fb2" 00:11:29.498 ], 00:11:29.498 "product_name": "Malloc disk", 00:11:29.498 "block_size": 512, 00:11:29.498 "num_blocks": 65536, 00:11:29.498 "uuid": "8f994985-5e99-4cbd-a541-e60337636fb2", 00:11:29.498 "assigned_rate_limits": { 00:11:29.498 
"rw_ios_per_sec": 0, 00:11:29.498 "rw_mbytes_per_sec": 0, 00:11:29.498 "r_mbytes_per_sec": 0, 00:11:29.498 "w_mbytes_per_sec": 0 00:11:29.498 }, 00:11:29.498 "claimed": true, 00:11:29.498 "claim_type": "exclusive_write", 00:11:29.498 "zoned": false, 00:11:29.498 "supported_io_types": { 00:11:29.498 "read": true, 00:11:29.498 "write": true, 00:11:29.498 "unmap": true, 00:11:29.498 "flush": true, 00:11:29.498 "reset": true, 00:11:29.498 "nvme_admin": false, 00:11:29.498 "nvme_io": false, 00:11:29.498 "nvme_io_md": false, 00:11:29.498 "write_zeroes": true, 00:11:29.498 "zcopy": true, 00:11:29.498 "get_zone_info": false, 00:11:29.498 "zone_management": false, 00:11:29.498 "zone_append": false, 00:11:29.498 "compare": false, 00:11:29.498 "compare_and_write": false, 00:11:29.498 "abort": true, 00:11:29.498 "seek_hole": false, 00:11:29.498 "seek_data": false, 00:11:29.498 "copy": true, 00:11:29.498 "nvme_iov_md": false 00:11:29.498 }, 00:11:29.498 "memory_domains": [ 00:11:29.498 { 00:11:29.498 "dma_device_id": "system", 00:11:29.498 "dma_device_type": 1 00:11:29.498 }, 00:11:29.498 { 00:11:29.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.498 "dma_device_type": 2 00:11:29.498 } 00:11:29.498 ], 00:11:29.498 "driver_specific": {} 00:11:29.498 } 00:11:29.498 ] 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.498 "name": "Existed_Raid", 00:11:29.498 "uuid": "72b14723-af84-40e9-ab2e-32e04d99f013", 00:11:29.498 "strip_size_kb": 64, 00:11:29.498 "state": "configuring", 00:11:29.498 "raid_level": "raid0", 00:11:29.498 "superblock": true, 00:11:29.498 "num_base_bdevs": 3, 00:11:29.498 "num_base_bdevs_discovered": 1, 00:11:29.498 "num_base_bdevs_operational": 3, 00:11:29.498 "base_bdevs_list": [ 00:11:29.498 { 00:11:29.498 "name": "BaseBdev1", 00:11:29.498 "uuid": "8f994985-5e99-4cbd-a541-e60337636fb2", 00:11:29.498 "is_configured": true, 00:11:29.498 "data_offset": 2048, 00:11:29.498 "data_size": 63488 
00:11:29.498 }, 00:11:29.498 { 00:11:29.498 "name": "BaseBdev2", 00:11:29.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.498 "is_configured": false, 00:11:29.498 "data_offset": 0, 00:11:29.498 "data_size": 0 00:11:29.498 }, 00:11:29.498 { 00:11:29.498 "name": "BaseBdev3", 00:11:29.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.498 "is_configured": false, 00:11:29.498 "data_offset": 0, 00:11:29.498 "data_size": 0 00:11:29.498 } 00:11:29.498 ] 00:11:29.498 }' 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.498 18:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.064 [2024-12-06 18:09:55.368587] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.064 [2024-12-06 18:09:55.368647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.064 [2024-12-06 18:09:55.376639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.064 [2024-12-06 
18:09:55.379190] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:30.064 [2024-12-06 18:09:55.379415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:30.064 [2024-12-06 18:09:55.379444] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:30.064 [2024-12-06 18:09:55.379462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.064 "name": "Existed_Raid", 00:11:30.064 "uuid": "a2c7db66-278d-4695-9005-9388a67e6f4f", 00:11:30.064 "strip_size_kb": 64, 00:11:30.064 "state": "configuring", 00:11:30.064 "raid_level": "raid0", 00:11:30.064 "superblock": true, 00:11:30.064 "num_base_bdevs": 3, 00:11:30.064 "num_base_bdevs_discovered": 1, 00:11:30.064 "num_base_bdevs_operational": 3, 00:11:30.064 "base_bdevs_list": [ 00:11:30.064 { 00:11:30.064 "name": "BaseBdev1", 00:11:30.064 "uuid": "8f994985-5e99-4cbd-a541-e60337636fb2", 00:11:30.064 "is_configured": true, 00:11:30.064 "data_offset": 2048, 00:11:30.064 "data_size": 63488 00:11:30.064 }, 00:11:30.064 { 00:11:30.064 "name": "BaseBdev2", 00:11:30.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.064 "is_configured": false, 00:11:30.064 "data_offset": 0, 00:11:30.064 "data_size": 0 00:11:30.064 }, 00:11:30.064 { 00:11:30.064 "name": "BaseBdev3", 00:11:30.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.064 "is_configured": false, 00:11:30.064 "data_offset": 0, 00:11:30.064 "data_size": 0 00:11:30.064 } 00:11:30.064 ] 00:11:30.064 }' 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.064 18:09:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.631 [2024-12-06 18:09:55.925797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.631 BaseBdev2 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.631 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.631 [ 00:11:30.631 { 00:11:30.631 "name": "BaseBdev2", 00:11:30.631 "aliases": [ 00:11:30.631 "74d15356-b79f-4971-92e6-b44cdf7518af" 00:11:30.631 ], 00:11:30.631 "product_name": "Malloc disk", 00:11:30.631 "block_size": 512, 00:11:30.631 "num_blocks": 65536, 00:11:30.631 "uuid": "74d15356-b79f-4971-92e6-b44cdf7518af", 00:11:30.631 "assigned_rate_limits": { 00:11:30.631 "rw_ios_per_sec": 0, 00:11:30.631 "rw_mbytes_per_sec": 0, 00:11:30.631 "r_mbytes_per_sec": 0, 00:11:30.631 "w_mbytes_per_sec": 0 00:11:30.631 }, 00:11:30.631 "claimed": true, 00:11:30.631 "claim_type": "exclusive_write", 00:11:30.631 "zoned": false, 00:11:30.631 "supported_io_types": { 00:11:30.631 "read": true, 00:11:30.631 "write": true, 00:11:30.631 "unmap": true, 00:11:30.631 "flush": true, 00:11:30.631 "reset": true, 00:11:30.631 "nvme_admin": false, 00:11:30.631 "nvme_io": false, 00:11:30.631 "nvme_io_md": false, 00:11:30.631 "write_zeroes": true, 00:11:30.631 "zcopy": true, 00:11:30.631 "get_zone_info": false, 00:11:30.631 "zone_management": false, 00:11:30.631 "zone_append": false, 00:11:30.631 "compare": false, 00:11:30.631 "compare_and_write": false, 00:11:30.631 "abort": true, 00:11:30.631 "seek_hole": false, 00:11:30.631 "seek_data": false, 00:11:30.631 "copy": true, 00:11:30.631 "nvme_iov_md": false 00:11:30.631 }, 00:11:30.631 "memory_domains": [ 00:11:30.631 { 00:11:30.631 "dma_device_id": "system", 00:11:30.631 "dma_device_type": 1 00:11:30.631 }, 00:11:30.631 { 00:11:30.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.631 "dma_device_type": 2 00:11:30.631 } 00:11:30.631 ], 00:11:30.631 "driver_specific": {} 00:11:30.631 } 00:11:30.631 ] 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.632 18:09:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.632 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.632 "name": "Existed_Raid", 00:11:30.632 "uuid": "a2c7db66-278d-4695-9005-9388a67e6f4f", 00:11:30.632 "strip_size_kb": 64, 00:11:30.632 "state": "configuring", 00:11:30.632 "raid_level": "raid0", 00:11:30.632 "superblock": true, 00:11:30.632 "num_base_bdevs": 3, 00:11:30.632 "num_base_bdevs_discovered": 2, 00:11:30.632 "num_base_bdevs_operational": 3, 00:11:30.632 "base_bdevs_list": [ 00:11:30.632 { 00:11:30.632 "name": "BaseBdev1", 00:11:30.632 "uuid": "8f994985-5e99-4cbd-a541-e60337636fb2", 00:11:30.632 "is_configured": true, 00:11:30.632 "data_offset": 2048, 00:11:30.632 "data_size": 63488 00:11:30.632 }, 00:11:30.632 { 00:11:30.632 "name": "BaseBdev2", 00:11:30.632 "uuid": "74d15356-b79f-4971-92e6-b44cdf7518af", 00:11:30.632 "is_configured": true, 00:11:30.632 "data_offset": 2048, 00:11:30.632 "data_size": 63488 00:11:30.632 }, 00:11:30.632 { 00:11:30.632 "name": "BaseBdev3", 00:11:30.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.632 "is_configured": false, 00:11:30.632 "data_offset": 0, 00:11:30.632 "data_size": 0 00:11:30.632 } 00:11:30.632 ] 00:11:30.632 }' 00:11:30.632 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.632 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.200 [2024-12-06 18:09:56.526652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.200 [2024-12-06 18:09:56.526998] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:31.200 [2024-12-06 18:09:56.527027] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:31.200 [2024-12-06 18:09:56.527353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:31.200 [2024-12-06 18:09:56.527551] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:31.200 [2024-12-06 18:09:56.527568] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:31.200 [2024-12-06 18:09:56.527742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.200 BaseBdev3 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.200 [ 00:11:31.200 { 00:11:31.200 "name": "BaseBdev3", 00:11:31.200 "aliases": [ 00:11:31.200 "594bcb67-2c40-4d77-9235-d3d934bedc2b" 00:11:31.200 ], 00:11:31.200 "product_name": "Malloc disk", 00:11:31.200 "block_size": 512, 00:11:31.200 "num_blocks": 65536, 00:11:31.200 "uuid": "594bcb67-2c40-4d77-9235-d3d934bedc2b", 00:11:31.200 "assigned_rate_limits": { 00:11:31.200 "rw_ios_per_sec": 0, 00:11:31.200 "rw_mbytes_per_sec": 0, 00:11:31.200 "r_mbytes_per_sec": 0, 00:11:31.200 "w_mbytes_per_sec": 0 00:11:31.200 }, 00:11:31.200 "claimed": true, 00:11:31.200 "claim_type": "exclusive_write", 00:11:31.200 "zoned": false, 00:11:31.200 "supported_io_types": { 00:11:31.200 "read": true, 00:11:31.200 "write": true, 00:11:31.200 "unmap": true, 00:11:31.200 "flush": true, 00:11:31.200 "reset": true, 00:11:31.200 "nvme_admin": false, 00:11:31.200 "nvme_io": false, 00:11:31.200 "nvme_io_md": false, 00:11:31.200 "write_zeroes": true, 00:11:31.200 "zcopy": true, 00:11:31.200 "get_zone_info": false, 00:11:31.200 "zone_management": false, 00:11:31.200 "zone_append": false, 00:11:31.200 "compare": false, 00:11:31.200 "compare_and_write": false, 00:11:31.200 "abort": true, 00:11:31.200 "seek_hole": false, 00:11:31.200 "seek_data": false, 00:11:31.200 "copy": true, 00:11:31.200 "nvme_iov_md": false 00:11:31.200 }, 00:11:31.200 "memory_domains": [ 00:11:31.200 { 00:11:31.200 "dma_device_id": "system", 00:11:31.200 "dma_device_type": 1 00:11:31.200 }, 00:11:31.200 { 00:11:31.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.200 "dma_device_type": 2 00:11:31.200 } 00:11:31.200 ], 00:11:31.200 "driver_specific": 
{} 00:11:31.200 } 00:11:31.200 ] 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.200 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.200 "name": "Existed_Raid", 00:11:31.200 "uuid": "a2c7db66-278d-4695-9005-9388a67e6f4f", 00:11:31.200 "strip_size_kb": 64, 00:11:31.200 "state": "online", 00:11:31.200 "raid_level": "raid0", 00:11:31.200 "superblock": true, 00:11:31.200 "num_base_bdevs": 3, 00:11:31.200 "num_base_bdevs_discovered": 3, 00:11:31.200 "num_base_bdevs_operational": 3, 00:11:31.200 "base_bdevs_list": [ 00:11:31.201 { 00:11:31.201 "name": "BaseBdev1", 00:11:31.201 "uuid": "8f994985-5e99-4cbd-a541-e60337636fb2", 00:11:31.201 "is_configured": true, 00:11:31.201 "data_offset": 2048, 00:11:31.201 "data_size": 63488 00:11:31.201 }, 00:11:31.201 { 00:11:31.201 "name": "BaseBdev2", 00:11:31.201 "uuid": "74d15356-b79f-4971-92e6-b44cdf7518af", 00:11:31.201 "is_configured": true, 00:11:31.201 "data_offset": 2048, 00:11:31.201 "data_size": 63488 00:11:31.201 }, 00:11:31.201 { 00:11:31.201 "name": "BaseBdev3", 00:11:31.201 "uuid": "594bcb67-2c40-4d77-9235-d3d934bedc2b", 00:11:31.201 "is_configured": true, 00:11:31.201 "data_offset": 2048, 00:11:31.201 "data_size": 63488 00:11:31.201 } 00:11:31.201 ] 00:11:31.201 }' 00:11:31.201 18:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.201 18:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.768 [2024-12-06 18:09:57.079278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:31.768 "name": "Existed_Raid", 00:11:31.768 "aliases": [ 00:11:31.768 "a2c7db66-278d-4695-9005-9388a67e6f4f" 00:11:31.768 ], 00:11:31.768 "product_name": "Raid Volume", 00:11:31.768 "block_size": 512, 00:11:31.768 "num_blocks": 190464, 00:11:31.768 "uuid": "a2c7db66-278d-4695-9005-9388a67e6f4f", 00:11:31.768 "assigned_rate_limits": { 00:11:31.768 "rw_ios_per_sec": 0, 00:11:31.768 "rw_mbytes_per_sec": 0, 00:11:31.768 "r_mbytes_per_sec": 0, 00:11:31.768 "w_mbytes_per_sec": 0 00:11:31.768 }, 00:11:31.768 "claimed": false, 00:11:31.768 "zoned": false, 00:11:31.768 "supported_io_types": { 00:11:31.768 "read": true, 00:11:31.768 "write": true, 00:11:31.768 "unmap": true, 00:11:31.768 "flush": true, 00:11:31.768 "reset": true, 00:11:31.768 "nvme_admin": false, 00:11:31.768 "nvme_io": false, 00:11:31.768 "nvme_io_md": false, 00:11:31.768 
"write_zeroes": true, 00:11:31.768 "zcopy": false, 00:11:31.768 "get_zone_info": false, 00:11:31.768 "zone_management": false, 00:11:31.768 "zone_append": false, 00:11:31.768 "compare": false, 00:11:31.768 "compare_and_write": false, 00:11:31.768 "abort": false, 00:11:31.768 "seek_hole": false, 00:11:31.768 "seek_data": false, 00:11:31.768 "copy": false, 00:11:31.768 "nvme_iov_md": false 00:11:31.768 }, 00:11:31.768 "memory_domains": [ 00:11:31.768 { 00:11:31.768 "dma_device_id": "system", 00:11:31.768 "dma_device_type": 1 00:11:31.768 }, 00:11:31.768 { 00:11:31.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.768 "dma_device_type": 2 00:11:31.768 }, 00:11:31.768 { 00:11:31.768 "dma_device_id": "system", 00:11:31.768 "dma_device_type": 1 00:11:31.768 }, 00:11:31.768 { 00:11:31.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.768 "dma_device_type": 2 00:11:31.768 }, 00:11:31.768 { 00:11:31.768 "dma_device_id": "system", 00:11:31.768 "dma_device_type": 1 00:11:31.768 }, 00:11:31.768 { 00:11:31.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.768 "dma_device_type": 2 00:11:31.768 } 00:11:31.768 ], 00:11:31.768 "driver_specific": { 00:11:31.768 "raid": { 00:11:31.768 "uuid": "a2c7db66-278d-4695-9005-9388a67e6f4f", 00:11:31.768 "strip_size_kb": 64, 00:11:31.768 "state": "online", 00:11:31.768 "raid_level": "raid0", 00:11:31.768 "superblock": true, 00:11:31.768 "num_base_bdevs": 3, 00:11:31.768 "num_base_bdevs_discovered": 3, 00:11:31.768 "num_base_bdevs_operational": 3, 00:11:31.768 "base_bdevs_list": [ 00:11:31.768 { 00:11:31.768 "name": "BaseBdev1", 00:11:31.768 "uuid": "8f994985-5e99-4cbd-a541-e60337636fb2", 00:11:31.768 "is_configured": true, 00:11:31.768 "data_offset": 2048, 00:11:31.768 "data_size": 63488 00:11:31.768 }, 00:11:31.768 { 00:11:31.768 "name": "BaseBdev2", 00:11:31.768 "uuid": "74d15356-b79f-4971-92e6-b44cdf7518af", 00:11:31.768 "is_configured": true, 00:11:31.768 "data_offset": 2048, 00:11:31.768 "data_size": 63488 00:11:31.768 }, 
00:11:31.768 { 00:11:31.768 "name": "BaseBdev3", 00:11:31.768 "uuid": "594bcb67-2c40-4d77-9235-d3d934bedc2b", 00:11:31.768 "is_configured": true, 00:11:31.768 "data_offset": 2048, 00:11:31.768 "data_size": 63488 00:11:31.768 } 00:11:31.768 ] 00:11:31.768 } 00:11:31.768 } 00:11:31.768 }' 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:31.768 BaseBdev2 00:11:31.768 BaseBdev3' 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.768 
18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.768 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.028 [2024-12-06 18:09:57.394777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:32.028 [2024-12-06 18:09:57.394828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.028 [2024-12-06 18:09:57.394902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.028 "name": "Existed_Raid", 00:11:32.028 "uuid": "a2c7db66-278d-4695-9005-9388a67e6f4f", 00:11:32.028 "strip_size_kb": 64, 00:11:32.028 "state": "offline", 00:11:32.028 "raid_level": "raid0", 00:11:32.028 "superblock": true, 00:11:32.028 "num_base_bdevs": 3, 00:11:32.028 "num_base_bdevs_discovered": 2, 00:11:32.028 "num_base_bdevs_operational": 2, 00:11:32.028 "base_bdevs_list": [ 00:11:32.028 { 00:11:32.028 "name": null, 00:11:32.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.028 "is_configured": false, 00:11:32.028 "data_offset": 0, 00:11:32.028 "data_size": 63488 00:11:32.028 }, 00:11:32.028 { 00:11:32.028 "name": "BaseBdev2", 00:11:32.028 "uuid": "74d15356-b79f-4971-92e6-b44cdf7518af", 00:11:32.028 "is_configured": true, 00:11:32.028 "data_offset": 2048, 00:11:32.028 "data_size": 63488 00:11:32.028 }, 00:11:32.028 { 00:11:32.028 "name": "BaseBdev3", 00:11:32.028 "uuid": "594bcb67-2c40-4d77-9235-d3d934bedc2b", 
00:11:32.028 "is_configured": true, 00:11:32.028 "data_offset": 2048, 00:11:32.028 "data_size": 63488 00:11:32.028 } 00:11:32.028 ] 00:11:32.028 }' 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.028 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.597 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:32.597 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:32.597 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.597 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.597 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.597 18:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:32.597 18:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.597 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:32.597 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:32.597 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:32.597 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.597 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.597 [2024-12-06 18:09:58.027259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:32.597 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.597 18:09:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:32.597 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:32.597 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.856 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.856 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.856 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:32.856 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.856 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:32.856 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:32.856 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:32.856 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.856 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.857 [2024-12-06 18:09:58.170986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:32.857 [2024-12-06 18:09:58.171054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.857 BaseBdev2 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:32.857 18:09:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.857 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.857 [ 00:11:32.857 { 00:11:32.857 "name": "BaseBdev2", 00:11:32.857 "aliases": [ 00:11:32.857 "ec0d4f61-b034-4f1f-b87c-dffebaf4b7fd" 00:11:32.857 ], 00:11:32.857 "product_name": "Malloc disk", 00:11:32.857 "block_size": 512, 00:11:32.857 "num_blocks": 65536, 00:11:32.857 "uuid": "ec0d4f61-b034-4f1f-b87c-dffebaf4b7fd", 00:11:32.857 "assigned_rate_limits": { 00:11:32.857 "rw_ios_per_sec": 0, 00:11:32.857 "rw_mbytes_per_sec": 0, 00:11:32.857 "r_mbytes_per_sec": 0, 00:11:32.857 "w_mbytes_per_sec": 0 00:11:32.857 }, 00:11:32.857 "claimed": false, 00:11:32.857 "zoned": false, 00:11:32.857 "supported_io_types": { 00:11:32.857 "read": true, 00:11:32.857 "write": true, 00:11:32.857 "unmap": true, 00:11:32.857 "flush": true, 00:11:32.857 "reset": true, 00:11:32.857 "nvme_admin": false, 00:11:32.857 "nvme_io": false, 00:11:32.857 "nvme_io_md": false, 00:11:32.857 "write_zeroes": true, 00:11:32.857 "zcopy": true, 00:11:33.117 "get_zone_info": false, 00:11:33.117 
"zone_management": false, 00:11:33.117 "zone_append": false, 00:11:33.117 "compare": false, 00:11:33.117 "compare_and_write": false, 00:11:33.117 "abort": true, 00:11:33.117 "seek_hole": false, 00:11:33.117 "seek_data": false, 00:11:33.117 "copy": true, 00:11:33.117 "nvme_iov_md": false 00:11:33.117 }, 00:11:33.117 "memory_domains": [ 00:11:33.117 { 00:11:33.117 "dma_device_id": "system", 00:11:33.117 "dma_device_type": 1 00:11:33.117 }, 00:11:33.117 { 00:11:33.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.117 "dma_device_type": 2 00:11:33.117 } 00:11:33.117 ], 00:11:33.117 "driver_specific": {} 00:11:33.117 } 00:11:33.117 ] 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.117 BaseBdev3 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.117 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.117 [ 00:11:33.117 { 00:11:33.117 "name": "BaseBdev3", 00:11:33.117 "aliases": [ 00:11:33.117 "a0e393c0-d632-487e-b35d-0c7b5352d110" 00:11:33.117 ], 00:11:33.117 "product_name": "Malloc disk", 00:11:33.117 "block_size": 512, 00:11:33.117 "num_blocks": 65536, 00:11:33.117 "uuid": "a0e393c0-d632-487e-b35d-0c7b5352d110", 00:11:33.117 "assigned_rate_limits": { 00:11:33.117 "rw_ios_per_sec": 0, 00:11:33.117 "rw_mbytes_per_sec": 0, 00:11:33.117 "r_mbytes_per_sec": 0, 00:11:33.117 "w_mbytes_per_sec": 0 00:11:33.117 }, 00:11:33.117 "claimed": false, 00:11:33.117 "zoned": false, 00:11:33.117 "supported_io_types": { 00:11:33.117 "read": true, 00:11:33.117 "write": true, 00:11:33.117 "unmap": true, 00:11:33.117 "flush": true, 00:11:33.117 "reset": true, 00:11:33.117 "nvme_admin": false, 00:11:33.117 "nvme_io": false, 00:11:33.117 "nvme_io_md": false, 00:11:33.117 "write_zeroes": true, 00:11:33.117 
"zcopy": true, 00:11:33.117 "get_zone_info": false, 00:11:33.117 "zone_management": false, 00:11:33.117 "zone_append": false, 00:11:33.117 "compare": false, 00:11:33.117 "compare_and_write": false, 00:11:33.117 "abort": true, 00:11:33.117 "seek_hole": false, 00:11:33.117 "seek_data": false, 00:11:33.117 "copy": true, 00:11:33.117 "nvme_iov_md": false 00:11:33.117 }, 00:11:33.117 "memory_domains": [ 00:11:33.117 { 00:11:33.117 "dma_device_id": "system", 00:11:33.117 "dma_device_type": 1 00:11:33.117 }, 00:11:33.117 { 00:11:33.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.117 "dma_device_type": 2 00:11:33.117 } 00:11:33.117 ], 00:11:33.117 "driver_specific": {} 00:11:33.117 } 00:11:33.117 ] 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.118 [2024-12-06 18:09:58.443996] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.118 [2024-12-06 18:09:58.444051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.118 [2024-12-06 18:09:58.444081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.118 [2024-12-06 18:09:58.446467] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.118 18:09:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.118 "name": "Existed_Raid", 00:11:33.118 "uuid": "262dac85-ebc4-4f99-bc6d-1e79b6a106df", 00:11:33.118 "strip_size_kb": 64, 00:11:33.118 "state": "configuring", 00:11:33.118 "raid_level": "raid0", 00:11:33.118 "superblock": true, 00:11:33.118 "num_base_bdevs": 3, 00:11:33.118 "num_base_bdevs_discovered": 2, 00:11:33.118 "num_base_bdevs_operational": 3, 00:11:33.118 "base_bdevs_list": [ 00:11:33.118 { 00:11:33.118 "name": "BaseBdev1", 00:11:33.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.118 "is_configured": false, 00:11:33.118 "data_offset": 0, 00:11:33.118 "data_size": 0 00:11:33.118 }, 00:11:33.118 { 00:11:33.118 "name": "BaseBdev2", 00:11:33.118 "uuid": "ec0d4f61-b034-4f1f-b87c-dffebaf4b7fd", 00:11:33.118 "is_configured": true, 00:11:33.118 "data_offset": 2048, 00:11:33.118 "data_size": 63488 00:11:33.118 }, 00:11:33.118 { 00:11:33.118 "name": "BaseBdev3", 00:11:33.118 "uuid": "a0e393c0-d632-487e-b35d-0c7b5352d110", 00:11:33.118 "is_configured": true, 00:11:33.118 "data_offset": 2048, 00:11:33.118 "data_size": 63488 00:11:33.118 } 00:11:33.118 ] 00:11:33.118 }' 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.118 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.688 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:33.688 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.688 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.688 [2024-12-06 18:09:58.960193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:33.688 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.688 18:09:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:33.688 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.689 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.689 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:33.689 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.689 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.689 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.689 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.689 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.689 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.689 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.689 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.689 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.689 18:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.689 18:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.689 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.689 "name": "Existed_Raid", 00:11:33.689 "uuid": "262dac85-ebc4-4f99-bc6d-1e79b6a106df", 00:11:33.689 "strip_size_kb": 64, 
00:11:33.689 "state": "configuring", 00:11:33.689 "raid_level": "raid0", 00:11:33.689 "superblock": true, 00:11:33.689 "num_base_bdevs": 3, 00:11:33.689 "num_base_bdevs_discovered": 1, 00:11:33.689 "num_base_bdevs_operational": 3, 00:11:33.689 "base_bdevs_list": [ 00:11:33.689 { 00:11:33.689 "name": "BaseBdev1", 00:11:33.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.689 "is_configured": false, 00:11:33.689 "data_offset": 0, 00:11:33.689 "data_size": 0 00:11:33.689 }, 00:11:33.689 { 00:11:33.689 "name": null, 00:11:33.689 "uuid": "ec0d4f61-b034-4f1f-b87c-dffebaf4b7fd", 00:11:33.689 "is_configured": false, 00:11:33.689 "data_offset": 0, 00:11:33.689 "data_size": 63488 00:11:33.689 }, 00:11:33.689 { 00:11:33.689 "name": "BaseBdev3", 00:11:33.689 "uuid": "a0e393c0-d632-487e-b35d-0c7b5352d110", 00:11:33.689 "is_configured": true, 00:11:33.689 "data_offset": 2048, 00:11:33.689 "data_size": 63488 00:11:33.689 } 00:11:33.689 ] 00:11:33.689 }' 00:11:33.689 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.689 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.258 [2024-12-06 18:09:59.571136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.258 BaseBdev1 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.258 
[ 00:11:34.258 { 00:11:34.258 "name": "BaseBdev1", 00:11:34.258 "aliases": [ 00:11:34.258 "a6fe60e8-2f28-40d6-9bff-88f3084c07fa" 00:11:34.258 ], 00:11:34.258 "product_name": "Malloc disk", 00:11:34.258 "block_size": 512, 00:11:34.258 "num_blocks": 65536, 00:11:34.258 "uuid": "a6fe60e8-2f28-40d6-9bff-88f3084c07fa", 00:11:34.258 "assigned_rate_limits": { 00:11:34.258 "rw_ios_per_sec": 0, 00:11:34.258 "rw_mbytes_per_sec": 0, 00:11:34.258 "r_mbytes_per_sec": 0, 00:11:34.258 "w_mbytes_per_sec": 0 00:11:34.258 }, 00:11:34.258 "claimed": true, 00:11:34.258 "claim_type": "exclusive_write", 00:11:34.258 "zoned": false, 00:11:34.258 "supported_io_types": { 00:11:34.258 "read": true, 00:11:34.258 "write": true, 00:11:34.258 "unmap": true, 00:11:34.258 "flush": true, 00:11:34.258 "reset": true, 00:11:34.258 "nvme_admin": false, 00:11:34.258 "nvme_io": false, 00:11:34.258 "nvme_io_md": false, 00:11:34.258 "write_zeroes": true, 00:11:34.258 "zcopy": true, 00:11:34.258 "get_zone_info": false, 00:11:34.258 "zone_management": false, 00:11:34.258 "zone_append": false, 00:11:34.258 "compare": false, 00:11:34.258 "compare_and_write": false, 00:11:34.258 "abort": true, 00:11:34.258 "seek_hole": false, 00:11:34.258 "seek_data": false, 00:11:34.258 "copy": true, 00:11:34.258 "nvme_iov_md": false 00:11:34.258 }, 00:11:34.258 "memory_domains": [ 00:11:34.258 { 00:11:34.258 "dma_device_id": "system", 00:11:34.258 "dma_device_type": 1 00:11:34.258 }, 00:11:34.258 { 00:11:34.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.258 "dma_device_type": 2 00:11:34.258 } 00:11:34.258 ], 00:11:34.258 "driver_specific": {} 00:11:34.258 } 00:11:34.258 ] 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.258 "name": "Existed_Raid", 00:11:34.258 "uuid": "262dac85-ebc4-4f99-bc6d-1e79b6a106df", 00:11:34.258 "strip_size_kb": 64, 00:11:34.258 "state": "configuring", 00:11:34.258 "raid_level": "raid0", 00:11:34.258 "superblock": true, 
00:11:34.258 "num_base_bdevs": 3, 00:11:34.258 "num_base_bdevs_discovered": 2, 00:11:34.258 "num_base_bdevs_operational": 3, 00:11:34.258 "base_bdevs_list": [ 00:11:34.258 { 00:11:34.258 "name": "BaseBdev1", 00:11:34.258 "uuid": "a6fe60e8-2f28-40d6-9bff-88f3084c07fa", 00:11:34.258 "is_configured": true, 00:11:34.258 "data_offset": 2048, 00:11:34.258 "data_size": 63488 00:11:34.258 }, 00:11:34.258 { 00:11:34.258 "name": null, 00:11:34.258 "uuid": "ec0d4f61-b034-4f1f-b87c-dffebaf4b7fd", 00:11:34.258 "is_configured": false, 00:11:34.258 "data_offset": 0, 00:11:34.258 "data_size": 63488 00:11:34.258 }, 00:11:34.258 { 00:11:34.258 "name": "BaseBdev3", 00:11:34.258 "uuid": "a0e393c0-d632-487e-b35d-0c7b5352d110", 00:11:34.258 "is_configured": true, 00:11:34.258 "data_offset": 2048, 00:11:34.258 "data_size": 63488 00:11:34.258 } 00:11:34.258 ] 00:11:34.258 }' 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.258 18:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.827 [2024-12-06 18:10:00.155329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.827 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.827 "name": "Existed_Raid", 00:11:34.827 "uuid": "262dac85-ebc4-4f99-bc6d-1e79b6a106df", 00:11:34.827 "strip_size_kb": 64, 00:11:34.827 "state": "configuring", 00:11:34.827 "raid_level": "raid0", 00:11:34.827 "superblock": true, 00:11:34.827 "num_base_bdevs": 3, 00:11:34.827 "num_base_bdevs_discovered": 1, 00:11:34.827 "num_base_bdevs_operational": 3, 00:11:34.827 "base_bdevs_list": [ 00:11:34.827 { 00:11:34.827 "name": "BaseBdev1", 00:11:34.827 "uuid": "a6fe60e8-2f28-40d6-9bff-88f3084c07fa", 00:11:34.827 "is_configured": true, 00:11:34.827 "data_offset": 2048, 00:11:34.827 "data_size": 63488 00:11:34.827 }, 00:11:34.827 { 00:11:34.827 "name": null, 00:11:34.827 "uuid": "ec0d4f61-b034-4f1f-b87c-dffebaf4b7fd", 00:11:34.827 "is_configured": false, 00:11:34.827 "data_offset": 0, 00:11:34.827 "data_size": 63488 00:11:34.827 }, 00:11:34.827 { 00:11:34.827 "name": null, 00:11:34.827 "uuid": "a0e393c0-d632-487e-b35d-0c7b5352d110", 00:11:34.828 "is_configured": false, 00:11:34.828 "data_offset": 0, 00:11:34.828 "data_size": 63488 00:11:34.828 } 00:11:34.828 ] 00:11:34.828 }' 00:11:34.828 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.828 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.395 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.395 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.395 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.395 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:11:35.395 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.395 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:35.395 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:35.395 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.395 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.395 [2024-12-06 18:10:00.727569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.395 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.395 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:35.395 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.396 "name": "Existed_Raid", 00:11:35.396 "uuid": "262dac85-ebc4-4f99-bc6d-1e79b6a106df", 00:11:35.396 "strip_size_kb": 64, 00:11:35.396 "state": "configuring", 00:11:35.396 "raid_level": "raid0", 00:11:35.396 "superblock": true, 00:11:35.396 "num_base_bdevs": 3, 00:11:35.396 "num_base_bdevs_discovered": 2, 00:11:35.396 "num_base_bdevs_operational": 3, 00:11:35.396 "base_bdevs_list": [ 00:11:35.396 { 00:11:35.396 "name": "BaseBdev1", 00:11:35.396 "uuid": "a6fe60e8-2f28-40d6-9bff-88f3084c07fa", 00:11:35.396 "is_configured": true, 00:11:35.396 "data_offset": 2048, 00:11:35.396 "data_size": 63488 00:11:35.396 }, 00:11:35.396 { 00:11:35.396 "name": null, 00:11:35.396 "uuid": "ec0d4f61-b034-4f1f-b87c-dffebaf4b7fd", 00:11:35.396 "is_configured": false, 00:11:35.396 "data_offset": 0, 00:11:35.396 "data_size": 63488 00:11:35.396 }, 00:11:35.396 { 00:11:35.396 "name": "BaseBdev3", 00:11:35.396 "uuid": "a0e393c0-d632-487e-b35d-0c7b5352d110", 00:11:35.396 "is_configured": true, 00:11:35.396 "data_offset": 2048, 00:11:35.396 "data_size": 63488 00:11:35.396 } 00:11:35.396 ] 00:11:35.396 }' 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.396 18:10:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.962 [2024-12-06 18:10:01.303769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.962 "name": "Existed_Raid", 00:11:35.962 "uuid": "262dac85-ebc4-4f99-bc6d-1e79b6a106df", 00:11:35.962 "strip_size_kb": 64, 00:11:35.962 "state": "configuring", 00:11:35.962 "raid_level": "raid0", 00:11:35.962 "superblock": true, 00:11:35.962 "num_base_bdevs": 3, 00:11:35.962 "num_base_bdevs_discovered": 1, 00:11:35.962 "num_base_bdevs_operational": 3, 00:11:35.962 "base_bdevs_list": [ 00:11:35.962 { 00:11:35.962 "name": null, 00:11:35.962 "uuid": "a6fe60e8-2f28-40d6-9bff-88f3084c07fa", 00:11:35.962 "is_configured": false, 00:11:35.962 "data_offset": 0, 00:11:35.962 "data_size": 63488 00:11:35.962 }, 00:11:35.962 { 00:11:35.962 "name": null, 00:11:35.962 "uuid": "ec0d4f61-b034-4f1f-b87c-dffebaf4b7fd", 00:11:35.962 "is_configured": false, 00:11:35.962 "data_offset": 0, 00:11:35.962 
"data_size": 63488 00:11:35.962 }, 00:11:35.962 { 00:11:35.962 "name": "BaseBdev3", 00:11:35.962 "uuid": "a0e393c0-d632-487e-b35d-0c7b5352d110", 00:11:35.962 "is_configured": true, 00:11:35.962 "data_offset": 2048, 00:11:35.962 "data_size": 63488 00:11:35.962 } 00:11:35.962 ] 00:11:35.962 }' 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.962 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.528 [2024-12-06 18:10:01.960062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:36.528 18:10:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.528 18:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.528 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.528 "name": "Existed_Raid", 00:11:36.528 "uuid": "262dac85-ebc4-4f99-bc6d-1e79b6a106df", 00:11:36.528 "strip_size_kb": 64, 00:11:36.528 "state": "configuring", 00:11:36.528 "raid_level": "raid0", 00:11:36.528 "superblock": true, 00:11:36.528 "num_base_bdevs": 3, 00:11:36.528 
"num_base_bdevs_discovered": 2, 00:11:36.528 "num_base_bdevs_operational": 3, 00:11:36.528 "base_bdevs_list": [ 00:11:36.528 { 00:11:36.528 "name": null, 00:11:36.528 "uuid": "a6fe60e8-2f28-40d6-9bff-88f3084c07fa", 00:11:36.528 "is_configured": false, 00:11:36.528 "data_offset": 0, 00:11:36.528 "data_size": 63488 00:11:36.528 }, 00:11:36.528 { 00:11:36.528 "name": "BaseBdev2", 00:11:36.528 "uuid": "ec0d4f61-b034-4f1f-b87c-dffebaf4b7fd", 00:11:36.528 "is_configured": true, 00:11:36.528 "data_offset": 2048, 00:11:36.528 "data_size": 63488 00:11:36.528 }, 00:11:36.528 { 00:11:36.529 "name": "BaseBdev3", 00:11:36.529 "uuid": "a0e393c0-d632-487e-b35d-0c7b5352d110", 00:11:36.529 "is_configured": true, 00:11:36.529 "data_offset": 2048, 00:11:36.529 "data_size": 63488 00:11:36.529 } 00:11:36.529 ] 00:11:36.529 }' 00:11:36.529 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.529 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.095 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.095 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.095 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.095 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:37.095 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.095 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:37.095 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:37.095 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.095 18:10:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.095 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.095 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.095 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a6fe60e8-2f28-40d6-9bff-88f3084c07fa 00:11:37.095 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.095 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.354 [2024-12-06 18:10:02.624497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:37.354 [2024-12-06 18:10:02.624808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:37.354 [2024-12-06 18:10:02.624850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:37.354 NewBaseBdev 00:11:37.354 [2024-12-06 18:10:02.625162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:37.354 [2024-12-06 18:10:02.625340] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:37.354 [2024-12-06 18:10:02.625356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:37.354 [2024-12-06 18:10:02.625516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:37.354 
18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.354 [ 00:11:37.354 { 00:11:37.354 "name": "NewBaseBdev", 00:11:37.354 "aliases": [ 00:11:37.354 "a6fe60e8-2f28-40d6-9bff-88f3084c07fa" 00:11:37.354 ], 00:11:37.354 "product_name": "Malloc disk", 00:11:37.354 "block_size": 512, 00:11:37.354 "num_blocks": 65536, 00:11:37.354 "uuid": "a6fe60e8-2f28-40d6-9bff-88f3084c07fa", 00:11:37.354 "assigned_rate_limits": { 00:11:37.354 "rw_ios_per_sec": 0, 00:11:37.354 "rw_mbytes_per_sec": 0, 00:11:37.354 "r_mbytes_per_sec": 0, 00:11:37.354 "w_mbytes_per_sec": 0 00:11:37.354 }, 00:11:37.354 "claimed": true, 00:11:37.354 "claim_type": "exclusive_write", 00:11:37.354 "zoned": false, 00:11:37.354 "supported_io_types": { 00:11:37.354 "read": true, 00:11:37.354 "write": true, 00:11:37.354 
"unmap": true, 00:11:37.354 "flush": true, 00:11:37.354 "reset": true, 00:11:37.354 "nvme_admin": false, 00:11:37.354 "nvme_io": false, 00:11:37.354 "nvme_io_md": false, 00:11:37.354 "write_zeroes": true, 00:11:37.354 "zcopy": true, 00:11:37.354 "get_zone_info": false, 00:11:37.354 "zone_management": false, 00:11:37.354 "zone_append": false, 00:11:37.354 "compare": false, 00:11:37.354 "compare_and_write": false, 00:11:37.354 "abort": true, 00:11:37.354 "seek_hole": false, 00:11:37.354 "seek_data": false, 00:11:37.354 "copy": true, 00:11:37.354 "nvme_iov_md": false 00:11:37.354 }, 00:11:37.354 "memory_domains": [ 00:11:37.354 { 00:11:37.354 "dma_device_id": "system", 00:11:37.354 "dma_device_type": 1 00:11:37.354 }, 00:11:37.354 { 00:11:37.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.354 "dma_device_type": 2 00:11:37.354 } 00:11:37.354 ], 00:11:37.354 "driver_specific": {} 00:11:37.354 } 00:11:37.354 ] 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.354 "name": "Existed_Raid", 00:11:37.354 "uuid": "262dac85-ebc4-4f99-bc6d-1e79b6a106df", 00:11:37.354 "strip_size_kb": 64, 00:11:37.354 "state": "online", 00:11:37.354 "raid_level": "raid0", 00:11:37.354 "superblock": true, 00:11:37.354 "num_base_bdevs": 3, 00:11:37.354 "num_base_bdevs_discovered": 3, 00:11:37.354 "num_base_bdevs_operational": 3, 00:11:37.354 "base_bdevs_list": [ 00:11:37.354 { 00:11:37.354 "name": "NewBaseBdev", 00:11:37.354 "uuid": "a6fe60e8-2f28-40d6-9bff-88f3084c07fa", 00:11:37.354 "is_configured": true, 00:11:37.354 "data_offset": 2048, 00:11:37.354 "data_size": 63488 00:11:37.354 }, 00:11:37.354 { 00:11:37.354 "name": "BaseBdev2", 00:11:37.354 "uuid": "ec0d4f61-b034-4f1f-b87c-dffebaf4b7fd", 00:11:37.354 "is_configured": true, 00:11:37.354 "data_offset": 2048, 00:11:37.354 "data_size": 63488 00:11:37.354 }, 00:11:37.354 { 00:11:37.354 "name": "BaseBdev3", 00:11:37.354 "uuid": "a0e393c0-d632-487e-b35d-0c7b5352d110", 00:11:37.354 
"is_configured": true, 00:11:37.354 "data_offset": 2048, 00:11:37.354 "data_size": 63488 00:11:37.354 } 00:11:37.354 ] 00:11:37.354 }' 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.354 18:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.922 [2024-12-06 18:10:03.161137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:37.922 "name": "Existed_Raid", 00:11:37.922 "aliases": [ 00:11:37.922 "262dac85-ebc4-4f99-bc6d-1e79b6a106df" 00:11:37.922 ], 00:11:37.922 "product_name": "Raid 
Volume", 00:11:37.922 "block_size": 512, 00:11:37.922 "num_blocks": 190464, 00:11:37.922 "uuid": "262dac85-ebc4-4f99-bc6d-1e79b6a106df", 00:11:37.922 "assigned_rate_limits": { 00:11:37.922 "rw_ios_per_sec": 0, 00:11:37.922 "rw_mbytes_per_sec": 0, 00:11:37.922 "r_mbytes_per_sec": 0, 00:11:37.922 "w_mbytes_per_sec": 0 00:11:37.922 }, 00:11:37.922 "claimed": false, 00:11:37.922 "zoned": false, 00:11:37.922 "supported_io_types": { 00:11:37.922 "read": true, 00:11:37.922 "write": true, 00:11:37.922 "unmap": true, 00:11:37.922 "flush": true, 00:11:37.922 "reset": true, 00:11:37.922 "nvme_admin": false, 00:11:37.922 "nvme_io": false, 00:11:37.922 "nvme_io_md": false, 00:11:37.922 "write_zeroes": true, 00:11:37.922 "zcopy": false, 00:11:37.922 "get_zone_info": false, 00:11:37.922 "zone_management": false, 00:11:37.922 "zone_append": false, 00:11:37.922 "compare": false, 00:11:37.922 "compare_and_write": false, 00:11:37.922 "abort": false, 00:11:37.922 "seek_hole": false, 00:11:37.922 "seek_data": false, 00:11:37.922 "copy": false, 00:11:37.922 "nvme_iov_md": false 00:11:37.922 }, 00:11:37.922 "memory_domains": [ 00:11:37.922 { 00:11:37.922 "dma_device_id": "system", 00:11:37.922 "dma_device_type": 1 00:11:37.922 }, 00:11:37.922 { 00:11:37.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.922 "dma_device_type": 2 00:11:37.922 }, 00:11:37.922 { 00:11:37.922 "dma_device_id": "system", 00:11:37.922 "dma_device_type": 1 00:11:37.922 }, 00:11:37.922 { 00:11:37.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.922 "dma_device_type": 2 00:11:37.922 }, 00:11:37.922 { 00:11:37.922 "dma_device_id": "system", 00:11:37.922 "dma_device_type": 1 00:11:37.922 }, 00:11:37.922 { 00:11:37.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.922 "dma_device_type": 2 00:11:37.922 } 00:11:37.922 ], 00:11:37.922 "driver_specific": { 00:11:37.922 "raid": { 00:11:37.922 "uuid": "262dac85-ebc4-4f99-bc6d-1e79b6a106df", 00:11:37.922 "strip_size_kb": 64, 00:11:37.922 "state": "online", 
00:11:37.922 "raid_level": "raid0", 00:11:37.922 "superblock": true, 00:11:37.922 "num_base_bdevs": 3, 00:11:37.922 "num_base_bdevs_discovered": 3, 00:11:37.922 "num_base_bdevs_operational": 3, 00:11:37.922 "base_bdevs_list": [ 00:11:37.922 { 00:11:37.922 "name": "NewBaseBdev", 00:11:37.922 "uuid": "a6fe60e8-2f28-40d6-9bff-88f3084c07fa", 00:11:37.922 "is_configured": true, 00:11:37.922 "data_offset": 2048, 00:11:37.922 "data_size": 63488 00:11:37.922 }, 00:11:37.922 { 00:11:37.922 "name": "BaseBdev2", 00:11:37.922 "uuid": "ec0d4f61-b034-4f1f-b87c-dffebaf4b7fd", 00:11:37.922 "is_configured": true, 00:11:37.922 "data_offset": 2048, 00:11:37.922 "data_size": 63488 00:11:37.922 }, 00:11:37.922 { 00:11:37.922 "name": "BaseBdev3", 00:11:37.922 "uuid": "a0e393c0-d632-487e-b35d-0c7b5352d110", 00:11:37.922 "is_configured": true, 00:11:37.922 "data_offset": 2048, 00:11:37.922 "data_size": 63488 00:11:37.922 } 00:11:37.922 ] 00:11:37.922 } 00:11:37.922 } 00:11:37.922 }' 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:37.922 BaseBdev2 00:11:37.922 BaseBdev3' 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.922 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.923 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.923 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.923 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:37.923 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.923 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.923 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.923 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.923 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.923 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.923 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.181 
18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.181 [2024-12-06 18:10:03.496755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:38.181 [2024-12-06 18:10:03.496836] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.181 [2024-12-06 18:10:03.496923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.181 [2024-12-06 18:10:03.496993] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.181 [2024-12-06 18:10:03.497013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64505 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64505 ']' 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64505 00:11:38.181 18:10:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64505 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.181 killing process with pid 64505 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64505' 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64505 00:11:38.181 [2024-12-06 18:10:03.535013] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.181 18:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64505 00:11:38.438 [2024-12-06 18:10:03.780226] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.373 18:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:39.373 00:11:39.373 real 0m11.750s 00:11:39.373 user 0m19.543s 00:11:39.373 sys 0m1.568s 00:11:39.373 18:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.373 18:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.373 ************************************ 00:11:39.373 END TEST raid_state_function_test_sb 00:11:39.373 ************************************ 00:11:39.632 18:10:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:11:39.632 18:10:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:39.632 18:10:04 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.632 18:10:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:39.632 ************************************ 00:11:39.632 START TEST raid_superblock_test 00:11:39.632 ************************************ 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:39.632 18:10:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65142 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65142 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65142 ']' 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.632 18:10:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.632 [2024-12-06 18:10:05.013416] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:11:39.632 [2024-12-06 18:10:05.013614] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65142 ] 00:11:39.913 [2024-12-06 18:10:05.202658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.913 [2024-12-06 18:10:05.360347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.172 [2024-12-06 18:10:05.570826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.172 [2024-12-06 18:10:05.570902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:40.738 
18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.738 malloc1 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.738 [2024-12-06 18:10:06.117686] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:40.738 [2024-12-06 18:10:06.117758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.738 [2024-12-06 18:10:06.117803] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:40.738 [2024-12-06 18:10:06.117819] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.738 [2024-12-06 18:10:06.120660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.738 [2024-12-06 18:10:06.120707] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:40.738 pt1 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.738 malloc2 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.738 [2024-12-06 18:10:06.169796] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:40.738 [2024-12-06 18:10:06.169863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.738 [2024-12-06 18:10:06.169899] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:40.738 [2024-12-06 18:10:06.169913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.738 [2024-12-06 18:10:06.172609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.738 [2024-12-06 18:10:06.172670] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:40.738 
pt2 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.738 malloc3 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.738 [2024-12-06 18:10:06.226468] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:40.738 [2024-12-06 18:10:06.226547] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.738 [2024-12-06 18:10:06.226579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:40.738 [2024-12-06 18:10:06.226594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.738 [2024-12-06 18:10:06.229312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.738 [2024-12-06 18:10:06.229356] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:40.738 pt3 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.738 [2024-12-06 18:10:06.238515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:40.738 [2024-12-06 18:10:06.240966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:40.738 [2024-12-06 18:10:06.241082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:40.738 [2024-12-06 18:10:06.241298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:40.738 [2024-12-06 18:10:06.241322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:40.738 [2024-12-06 18:10:06.241633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:40.738 [2024-12-06 18:10:06.241869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:40.738 [2024-12-06 18:10:06.241894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:40.738 [2024-12-06 18:10:06.242097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.738 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.738 18:10:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.996 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.996 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.996 "name": "raid_bdev1", 00:11:40.996 "uuid": "ec51a2e6-6990-4fa1-aa3d-e0542d1f7eb2", 00:11:40.996 "strip_size_kb": 64, 00:11:40.996 "state": "online", 00:11:40.996 "raid_level": "raid0", 00:11:40.996 "superblock": true, 00:11:40.996 "num_base_bdevs": 3, 00:11:40.996 "num_base_bdevs_discovered": 3, 00:11:40.996 "num_base_bdevs_operational": 3, 00:11:40.997 "base_bdevs_list": [ 00:11:40.997 { 00:11:40.997 "name": "pt1", 00:11:40.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.997 "is_configured": true, 00:11:40.997 "data_offset": 2048, 00:11:40.997 "data_size": 63488 00:11:40.997 }, 00:11:40.997 { 00:11:40.997 "name": "pt2", 00:11:40.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.997 "is_configured": true, 00:11:40.997 "data_offset": 2048, 00:11:40.997 "data_size": 63488 00:11:40.997 }, 00:11:40.997 { 00:11:40.997 "name": "pt3", 00:11:40.997 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.997 "is_configured": true, 00:11:40.997 "data_offset": 2048, 00:11:40.997 "data_size": 63488 00:11:40.997 } 00:11:40.997 ] 00:11:40.997 }' 00:11:40.997 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.997 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.255 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:41.255 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:41.255 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.255 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:41.255 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.255 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.255 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.255 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:41.255 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.255 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.255 [2024-12-06 18:10:06.759028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.513 "name": "raid_bdev1", 00:11:41.513 "aliases": [ 00:11:41.513 "ec51a2e6-6990-4fa1-aa3d-e0542d1f7eb2" 00:11:41.513 ], 00:11:41.513 "product_name": "Raid Volume", 00:11:41.513 "block_size": 512, 00:11:41.513 "num_blocks": 190464, 00:11:41.513 "uuid": "ec51a2e6-6990-4fa1-aa3d-e0542d1f7eb2", 00:11:41.513 "assigned_rate_limits": { 00:11:41.513 "rw_ios_per_sec": 0, 00:11:41.513 "rw_mbytes_per_sec": 0, 00:11:41.513 "r_mbytes_per_sec": 0, 00:11:41.513 "w_mbytes_per_sec": 0 00:11:41.513 }, 00:11:41.513 "claimed": false, 00:11:41.513 "zoned": false, 00:11:41.513 "supported_io_types": { 00:11:41.513 "read": true, 00:11:41.513 "write": true, 00:11:41.513 "unmap": true, 00:11:41.513 "flush": true, 00:11:41.513 "reset": true, 00:11:41.513 "nvme_admin": false, 00:11:41.513 "nvme_io": false, 00:11:41.513 "nvme_io_md": false, 00:11:41.513 "write_zeroes": true, 00:11:41.513 "zcopy": false, 00:11:41.513 "get_zone_info": false, 00:11:41.513 "zone_management": false, 00:11:41.513 "zone_append": false, 00:11:41.513 "compare": 
false, 00:11:41.513 "compare_and_write": false, 00:11:41.513 "abort": false, 00:11:41.513 "seek_hole": false, 00:11:41.513 "seek_data": false, 00:11:41.513 "copy": false, 00:11:41.513 "nvme_iov_md": false 00:11:41.513 }, 00:11:41.513 "memory_domains": [ 00:11:41.513 { 00:11:41.513 "dma_device_id": "system", 00:11:41.513 "dma_device_type": 1 00:11:41.513 }, 00:11:41.513 { 00:11:41.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.513 "dma_device_type": 2 00:11:41.513 }, 00:11:41.513 { 00:11:41.513 "dma_device_id": "system", 00:11:41.513 "dma_device_type": 1 00:11:41.513 }, 00:11:41.513 { 00:11:41.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.513 "dma_device_type": 2 00:11:41.513 }, 00:11:41.513 { 00:11:41.513 "dma_device_id": "system", 00:11:41.513 "dma_device_type": 1 00:11:41.513 }, 00:11:41.513 { 00:11:41.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.513 "dma_device_type": 2 00:11:41.513 } 00:11:41.513 ], 00:11:41.513 "driver_specific": { 00:11:41.513 "raid": { 00:11:41.513 "uuid": "ec51a2e6-6990-4fa1-aa3d-e0542d1f7eb2", 00:11:41.513 "strip_size_kb": 64, 00:11:41.513 "state": "online", 00:11:41.513 "raid_level": "raid0", 00:11:41.513 "superblock": true, 00:11:41.513 "num_base_bdevs": 3, 00:11:41.513 "num_base_bdevs_discovered": 3, 00:11:41.513 "num_base_bdevs_operational": 3, 00:11:41.513 "base_bdevs_list": [ 00:11:41.513 { 00:11:41.513 "name": "pt1", 00:11:41.513 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:41.513 "is_configured": true, 00:11:41.513 "data_offset": 2048, 00:11:41.513 "data_size": 63488 00:11:41.513 }, 00:11:41.513 { 00:11:41.513 "name": "pt2", 00:11:41.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.513 "is_configured": true, 00:11:41.513 "data_offset": 2048, 00:11:41.513 "data_size": 63488 00:11:41.513 }, 00:11:41.513 { 00:11:41.513 "name": "pt3", 00:11:41.513 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:41.513 "is_configured": true, 00:11:41.513 "data_offset": 2048, 00:11:41.513 "data_size": 
63488 00:11:41.513 } 00:11:41.513 ] 00:11:41.513 } 00:11:41.513 } 00:11:41.513 }' 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:41.513 pt2 00:11:41.513 pt3' 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.513 
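The trace above shows `bdev_raid.sh@189`–`@193` joining `block_size`, `md_size`, `md_interleave`, and `dif_type` into one string for the raid bdev and for each base bdev, then comparing the two with a glob match. A minimal standalone sketch of that comparison (the field values here are taken from the trace; the variable names follow the script, but the strings are hand-built rather than produced by the real `jq` pipeline):

```shell
#!/usr/bin/env bash
# Joined metadata string: "<block_size> <md_size> <md_interleave> <dif_type>".
# In the trace the last three fields are null, so jq's join(" ") leaves only
# the separators, giving "512" followed by three spaces.
cmp_raid_bdev='512   '
cmp_base_bdev='512   '

# bdev_raid.sh@193 does the same check with a glob pattern ([[ 512 == \5\1\2\ \ \ ]]);
# a plain string comparison is equivalent for literal values.
if [[ "$cmp_raid_bdev" == "$cmp_base_bdev" ]]; then
    echo "metadata match"
else
    echo "metadata mismatch" >&2
    exit 1
fi
```

The check passes only when every base bdev reports the same block size and DIF/metadata layout as the raid volume itself, which is why the loop in the trace repeats it for `pt1`, `pt2`, and `pt3`.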
18:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.513 18:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.514 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.514 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.514 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.514 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.514 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:41.514 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.514 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:41.773 [2024-12-06 18:10:07.075045] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ec51a2e6-6990-4fa1-aa3d-e0542d1f7eb2 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ec51a2e6-6990-4fa1-aa3d-e0542d1f7eb2 ']' 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.773 [2024-12-06 18:10:07.118724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:41.773 [2024-12-06 18:10:07.118763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.773 [2024-12-06 18:10:07.118864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.773 [2024-12-06 18:10:07.118948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.773 [2024-12-06 18:10:07.118969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
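Every `rpc_cmd` call in this trace is bracketed by `autotest_common.sh` lines `@563 xtrace_disable` and `@591 [[ 0 == 0 ]]`. A simplified sketch of that pattern (this is an assumption about the mechanism, not the actual `autotest_common.sh` source): remember whether `set -x` tracing was on, silence it for the noisy helper, then restore the previous state.

```shell
#!/usr/bin/env bash
# Save current xtrace state ($- contains "x" when tracing is on), then disable it.
xtrace_disable() {
    if [[ $- == *x* ]]; then PREV_XTRACE=1; else PREV_XTRACE=0; fi
    set +x
}

# Re-enable tracing only if it was on before xtrace_disable ran.
xtrace_restore() {
    if (( PREV_XTRACE )); then
        set -x
    fi
}

set -x
xtrace_disable
echo "quiet section"   # commands here produce no "+ ..." trace lines
xtrace_restore
set +x
```

This keeps the log readable: the RPC plumbing runs untraced, while the surrounding test logic stays visible in the xtrace output.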
00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:41.773 18:10:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.773 [2024-12-06 18:10:07.274877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:41.773 [2024-12-06 18:10:07.277292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:41.773 [2024-12-06 18:10:07.277367] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:41.773 [2024-12-06 18:10:07.277442] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:41.773 [2024-12-06 18:10:07.277513] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:41.773 [2024-12-06 18:10:07.277547] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:41.773 [2024-12-06 18:10:07.277575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:41.773 [2024-12-06 18:10:07.277592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:41.773 request: 00:11:41.773 { 00:11:41.773 "name": "raid_bdev1", 00:11:41.773 "raid_level": "raid0", 00:11:41.773 "base_bdevs": [ 00:11:41.773 "malloc1", 00:11:41.773 "malloc2", 00:11:41.773 "malloc3" 00:11:41.773 ], 00:11:41.773 "strip_size_kb": 64, 00:11:41.773 "superblock": false, 00:11:41.773 "method": "bdev_raid_create", 00:11:41.773 "req_id": 1 00:11:41.773 } 00:11:41.773 Got JSON-RPC error response 00:11:41.773 response: 00:11:41.773 { 00:11:41.773 "code": -17, 00:11:41.773 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:41.773 } 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.773 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.033 [2024-12-06 18:10:07.338792] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:42.033 [2024-12-06 18:10:07.338860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.033 [2024-12-06 18:10:07.338888] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:42.033 [2024-12-06 18:10:07.338902] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.033 [2024-12-06 18:10:07.341722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.033 [2024-12-06 18:10:07.341779] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:42.033 [2024-12-06 18:10:07.341880] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:42.033 [2024-12-06 18:10:07.341945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:11:42.033 pt1 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.033 "name": "raid_bdev1", 00:11:42.033 "uuid": "ec51a2e6-6990-4fa1-aa3d-e0542d1f7eb2", 00:11:42.033 
"strip_size_kb": 64, 00:11:42.033 "state": "configuring", 00:11:42.033 "raid_level": "raid0", 00:11:42.033 "superblock": true, 00:11:42.033 "num_base_bdevs": 3, 00:11:42.033 "num_base_bdevs_discovered": 1, 00:11:42.033 "num_base_bdevs_operational": 3, 00:11:42.033 "base_bdevs_list": [ 00:11:42.033 { 00:11:42.033 "name": "pt1", 00:11:42.033 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:42.033 "is_configured": true, 00:11:42.033 "data_offset": 2048, 00:11:42.033 "data_size": 63488 00:11:42.033 }, 00:11:42.033 { 00:11:42.033 "name": null, 00:11:42.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.033 "is_configured": false, 00:11:42.033 "data_offset": 2048, 00:11:42.033 "data_size": 63488 00:11:42.033 }, 00:11:42.033 { 00:11:42.033 "name": null, 00:11:42.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:42.033 "is_configured": false, 00:11:42.033 "data_offset": 2048, 00:11:42.033 "data_size": 63488 00:11:42.033 } 00:11:42.033 ] 00:11:42.033 }' 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.033 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.600 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:42.600 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:42.600 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.600 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.601 [2024-12-06 18:10:07.854996] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:42.601 [2024-12-06 18:10:07.855080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.601 [2024-12-06 18:10:07.855117] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:11:42.601 [2024-12-06 18:10:07.855132] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.601 [2024-12-06 18:10:07.855669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.601 [2024-12-06 18:10:07.855708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:42.601 [2024-12-06 18:10:07.855832] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:42.601 [2024-12-06 18:10:07.855875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:42.601 pt2 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.601 [2024-12-06 18:10:07.862977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.601 18:10:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.601 "name": "raid_bdev1", 00:11:42.601 "uuid": "ec51a2e6-6990-4fa1-aa3d-e0542d1f7eb2", 00:11:42.601 "strip_size_kb": 64, 00:11:42.601 "state": "configuring", 00:11:42.601 "raid_level": "raid0", 00:11:42.601 "superblock": true, 00:11:42.601 "num_base_bdevs": 3, 00:11:42.601 "num_base_bdevs_discovered": 1, 00:11:42.601 "num_base_bdevs_operational": 3, 00:11:42.601 "base_bdevs_list": [ 00:11:42.601 { 00:11:42.601 "name": "pt1", 00:11:42.601 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:42.601 "is_configured": true, 00:11:42.601 "data_offset": 2048, 00:11:42.601 "data_size": 63488 00:11:42.601 }, 00:11:42.601 { 00:11:42.601 "name": null, 00:11:42.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.601 "is_configured": false, 00:11:42.601 "data_offset": 0, 00:11:42.601 "data_size": 63488 00:11:42.601 }, 00:11:42.601 { 00:11:42.601 "name": null, 00:11:42.601 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:42.601 
"is_configured": false, 00:11:42.601 "data_offset": 2048, 00:11:42.601 "data_size": 63488 00:11:42.601 } 00:11:42.601 ] 00:11:42.601 }' 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.601 18:10:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.859 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:42.859 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:42.859 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:42.859 18:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.859 18:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.859 [2024-12-06 18:10:08.375123] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:42.859 [2024-12-06 18:10:08.375235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.859 [2024-12-06 18:10:08.375261] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:42.859 [2024-12-06 18:10:08.375277] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.859 [2024-12-06 18:10:08.375847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.859 [2024-12-06 18:10:08.375888] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:42.859 [2024-12-06 18:10:08.375985] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:42.859 [2024-12-06 18:10:08.376022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:43.117 pt2 00:11:43.117 18:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:43.117 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:43.117 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:43.117 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:43.117 18:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.117 18:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.117 [2024-12-06 18:10:08.383086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:43.117 [2024-12-06 18:10:08.383142] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.117 [2024-12-06 18:10:08.383161] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:43.117 [2024-12-06 18:10:08.383176] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.117 [2024-12-06 18:10:08.383602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.117 [2024-12-06 18:10:08.383650] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:43.117 [2024-12-06 18:10:08.383724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:43.117 [2024-12-06 18:10:08.383756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:43.117 [2024-12-06 18:10:08.383917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:43.117 [2024-12-06 18:10:08.383938] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:43.117 [2024-12-06 18:10:08.384240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:43.117 [2024-12-06 18:10:08.384437] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:43.117 [2024-12-06 18:10:08.384464] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:43.117 [2024-12-06 18:10:08.384641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.117 pt3 00:11:43.117 18:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.117 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:43.117 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:43.117 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:43.117 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.117 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.117 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:43.118 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.118 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.118 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.118 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.118 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.118 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.118 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.118 18:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:43.118 18:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.118 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.118 18:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.118 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.118 "name": "raid_bdev1", 00:11:43.118 "uuid": "ec51a2e6-6990-4fa1-aa3d-e0542d1f7eb2", 00:11:43.118 "strip_size_kb": 64, 00:11:43.118 "state": "online", 00:11:43.118 "raid_level": "raid0", 00:11:43.118 "superblock": true, 00:11:43.118 "num_base_bdevs": 3, 00:11:43.118 "num_base_bdevs_discovered": 3, 00:11:43.118 "num_base_bdevs_operational": 3, 00:11:43.118 "base_bdevs_list": [ 00:11:43.118 { 00:11:43.118 "name": "pt1", 00:11:43.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:43.118 "is_configured": true, 00:11:43.118 "data_offset": 2048, 00:11:43.118 "data_size": 63488 00:11:43.118 }, 00:11:43.118 { 00:11:43.118 "name": "pt2", 00:11:43.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.118 "is_configured": true, 00:11:43.118 "data_offset": 2048, 00:11:43.118 "data_size": 63488 00:11:43.118 }, 00:11:43.118 { 00:11:43.118 "name": "pt3", 00:11:43.118 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:43.118 "is_configured": true, 00:11:43.118 "data_offset": 2048, 00:11:43.118 "data_size": 63488 00:11:43.118 } 00:11:43.118 ] 00:11:43.118 }' 00:11:43.118 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.118 18:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.376 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:43.376 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:43.376 18:10:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:43.376 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:43.376 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:43.376 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:43.376 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:43.376 18:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.376 18:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.376 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:43.376 [2024-12-06 18:10:08.883761] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.641 18:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.641 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:43.641 "name": "raid_bdev1", 00:11:43.641 "aliases": [ 00:11:43.641 "ec51a2e6-6990-4fa1-aa3d-e0542d1f7eb2" 00:11:43.641 ], 00:11:43.641 "product_name": "Raid Volume", 00:11:43.641 "block_size": 512, 00:11:43.641 "num_blocks": 190464, 00:11:43.641 "uuid": "ec51a2e6-6990-4fa1-aa3d-e0542d1f7eb2", 00:11:43.641 "assigned_rate_limits": { 00:11:43.641 "rw_ios_per_sec": 0, 00:11:43.641 "rw_mbytes_per_sec": 0, 00:11:43.641 "r_mbytes_per_sec": 0, 00:11:43.641 "w_mbytes_per_sec": 0 00:11:43.641 }, 00:11:43.641 "claimed": false, 00:11:43.641 "zoned": false, 00:11:43.641 "supported_io_types": { 00:11:43.641 "read": true, 00:11:43.641 "write": true, 00:11:43.641 "unmap": true, 00:11:43.641 "flush": true, 00:11:43.641 "reset": true, 00:11:43.641 "nvme_admin": false, 00:11:43.641 "nvme_io": false, 00:11:43.641 "nvme_io_md": false, 00:11:43.641 
"write_zeroes": true, 00:11:43.641 "zcopy": false, 00:11:43.641 "get_zone_info": false, 00:11:43.641 "zone_management": false, 00:11:43.641 "zone_append": false, 00:11:43.641 "compare": false, 00:11:43.641 "compare_and_write": false, 00:11:43.641 "abort": false, 00:11:43.641 "seek_hole": false, 00:11:43.641 "seek_data": false, 00:11:43.641 "copy": false, 00:11:43.641 "nvme_iov_md": false 00:11:43.641 }, 00:11:43.641 "memory_domains": [ 00:11:43.641 { 00:11:43.641 "dma_device_id": "system", 00:11:43.641 "dma_device_type": 1 00:11:43.641 }, 00:11:43.641 { 00:11:43.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.641 "dma_device_type": 2 00:11:43.641 }, 00:11:43.641 { 00:11:43.641 "dma_device_id": "system", 00:11:43.641 "dma_device_type": 1 00:11:43.641 }, 00:11:43.641 { 00:11:43.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.641 "dma_device_type": 2 00:11:43.641 }, 00:11:43.641 { 00:11:43.641 "dma_device_id": "system", 00:11:43.641 "dma_device_type": 1 00:11:43.641 }, 00:11:43.641 { 00:11:43.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.641 "dma_device_type": 2 00:11:43.641 } 00:11:43.641 ], 00:11:43.641 "driver_specific": { 00:11:43.641 "raid": { 00:11:43.641 "uuid": "ec51a2e6-6990-4fa1-aa3d-e0542d1f7eb2", 00:11:43.641 "strip_size_kb": 64, 00:11:43.641 "state": "online", 00:11:43.641 "raid_level": "raid0", 00:11:43.641 "superblock": true, 00:11:43.641 "num_base_bdevs": 3, 00:11:43.641 "num_base_bdevs_discovered": 3, 00:11:43.641 "num_base_bdevs_operational": 3, 00:11:43.641 "base_bdevs_list": [ 00:11:43.641 { 00:11:43.641 "name": "pt1", 00:11:43.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:43.641 "is_configured": true, 00:11:43.641 "data_offset": 2048, 00:11:43.641 "data_size": 63488 00:11:43.641 }, 00:11:43.641 { 00:11:43.641 "name": "pt2", 00:11:43.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.641 "is_configured": true, 00:11:43.641 "data_offset": 2048, 00:11:43.641 "data_size": 63488 00:11:43.641 }, 00:11:43.641 
{ 00:11:43.641 "name": "pt3", 00:11:43.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:43.641 "is_configured": true, 00:11:43.641 "data_offset": 2048, 00:11:43.641 "data_size": 63488 00:11:43.641 } 00:11:43.641 ] 00:11:43.641 } 00:11:43.641 } 00:11:43.641 }' 00:11:43.641 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:43.641 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:43.641 pt2 00:11:43.641 pt3' 00:11:43.641 18:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:43.641 18:10:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.641 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.929 
[2024-12-06 18:10:09.199860] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ec51a2e6-6990-4fa1-aa3d-e0542d1f7eb2 '!=' ec51a2e6-6990-4fa1-aa3d-e0542d1f7eb2 ']' 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65142 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65142 ']' 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65142 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65142 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.929 killing process with pid 65142 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65142' 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65142 00:11:43.929 [2024-12-06 18:10:09.280598] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.929 18:10:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65142 00:11:43.929 [2024-12-06 18:10:09.280712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.929 [2024-12-06 18:10:09.280809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.929 [2024-12-06 18:10:09.280831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:44.187 [2024-12-06 18:10:09.545299] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.121 18:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:45.121 00:11:45.121 real 0m5.667s 00:11:45.121 user 0m8.583s 00:11:45.121 sys 0m0.814s 00:11:45.121 18:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.121 18:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.121 ************************************ 00:11:45.121 END TEST raid_superblock_test 00:11:45.121 ************************************ 00:11:45.121 18:10:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:11:45.121 18:10:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:45.121 18:10:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.121 18:10:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.121 ************************************ 00:11:45.121 START TEST raid_read_error_test 00:11:45.121 ************************************ 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:45.121 18:10:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:45.121 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UcDq8LBK6H 00:11:45.122 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65405 00:11:45.122 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:45.122 18:10:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65405 00:11:45.122 18:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65405 ']' 00:11:45.122 18:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.122 18:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.122 18:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.122 18:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.122 18:10:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.380 [2024-12-06 18:10:10.738103] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:11:45.380 [2024-12-06 18:10:10.738277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65405 ] 00:11:45.639 [2024-12-06 18:10:10.926517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.639 [2024-12-06 18:10:11.085865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.898 [2024-12-06 18:10:11.295198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.898 [2024-12-06 18:10:11.295245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.465 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.465 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:46.465 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.465 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:46.465 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.465 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.465 BaseBdev1_malloc 00:11:46.465 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.465 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:46.465 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.465 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.465 true 00:11:46.465 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.466 [2024-12-06 18:10:11.767883] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:46.466 [2024-12-06 18:10:11.767953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.466 [2024-12-06 18:10:11.767982] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:46.466 [2024-12-06 18:10:11.767999] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.466 [2024-12-06 18:10:11.770723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.466 [2024-12-06 18:10:11.770788] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:46.466 BaseBdev1 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.466 BaseBdev2_malloc 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.466 true 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.466 [2024-12-06 18:10:11.823448] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:46.466 [2024-12-06 18:10:11.823513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.466 [2024-12-06 18:10:11.823537] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:46.466 [2024-12-06 18:10:11.823555] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.466 [2024-12-06 18:10:11.826260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.466 [2024-12-06 18:10:11.826307] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:46.466 BaseBdev2 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.466 BaseBdev3_malloc 00:11:46.466 18:10:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.466 true 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.466 [2024-12-06 18:10:11.894663] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:46.466 [2024-12-06 18:10:11.894737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.466 [2024-12-06 18:10:11.894763] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:46.466 [2024-12-06 18:10:11.894799] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.466 [2024-12-06 18:10:11.897517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.466 [2024-12-06 18:10:11.897565] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:46.466 BaseBdev3 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.466 [2024-12-06 18:10:11.902789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.466 [2024-12-06 18:10:11.905193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.466 [2024-12-06 18:10:11.905302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.466 [2024-12-06 18:10:11.905566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:46.466 [2024-12-06 18:10:11.905597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:46.466 [2024-12-06 18:10:11.905925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:46.466 [2024-12-06 18:10:11.906150] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:46.466 [2024-12-06 18:10:11.906182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:46.466 [2024-12-06 18:10:11.906361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.466 18:10:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.466 "name": "raid_bdev1", 00:11:46.466 "uuid": "f5a9a3ac-e99b-4b50-9b51-fc948e976293", 00:11:46.466 "strip_size_kb": 64, 00:11:46.466 "state": "online", 00:11:46.466 "raid_level": "raid0", 00:11:46.466 "superblock": true, 00:11:46.466 "num_base_bdevs": 3, 00:11:46.466 "num_base_bdevs_discovered": 3, 00:11:46.466 "num_base_bdevs_operational": 3, 00:11:46.466 "base_bdevs_list": [ 00:11:46.466 { 00:11:46.466 "name": "BaseBdev1", 00:11:46.466 "uuid": "b8b2bde3-a63c-5d1c-9a62-44b9304fff04", 00:11:46.466 "is_configured": true, 00:11:46.466 "data_offset": 2048, 00:11:46.466 "data_size": 63488 00:11:46.466 }, 00:11:46.466 { 00:11:46.466 "name": "BaseBdev2", 00:11:46.466 "uuid": "722065d7-4af5-5bf4-ae28-f9d58255a381", 00:11:46.466 "is_configured": true, 00:11:46.466 "data_offset": 2048, 00:11:46.466 "data_size": 63488 
00:11:46.466 }, 00:11:46.466 { 00:11:46.466 "name": "BaseBdev3", 00:11:46.466 "uuid": "2d684924-ab33-533f-b914-3cd34d7c0b37", 00:11:46.466 "is_configured": true, 00:11:46.466 "data_offset": 2048, 00:11:46.466 "data_size": 63488 00:11:46.466 } 00:11:46.466 ] 00:11:46.466 }' 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.466 18:10:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.033 18:10:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:47.033 18:10:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:47.033 [2024-12-06 18:10:12.540303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:47.968 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:47.968 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.968 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.968 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.968 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:47.968 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:47.968 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:47.968 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:47.968 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.969 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:47.969 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.969 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.969 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.969 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.969 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.969 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.969 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.969 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.969 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.969 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.969 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.969 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.227 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.227 "name": "raid_bdev1", 00:11:48.227 "uuid": "f5a9a3ac-e99b-4b50-9b51-fc948e976293", 00:11:48.227 "strip_size_kb": 64, 00:11:48.227 "state": "online", 00:11:48.227 "raid_level": "raid0", 00:11:48.227 "superblock": true, 00:11:48.227 "num_base_bdevs": 3, 00:11:48.227 "num_base_bdevs_discovered": 3, 00:11:48.227 "num_base_bdevs_operational": 3, 00:11:48.227 "base_bdevs_list": [ 00:11:48.227 { 00:11:48.227 "name": "BaseBdev1", 00:11:48.227 "uuid": "b8b2bde3-a63c-5d1c-9a62-44b9304fff04", 00:11:48.227 "is_configured": true, 00:11:48.227 "data_offset": 2048, 00:11:48.227 "data_size": 63488 
00:11:48.227 }, 00:11:48.227 { 00:11:48.227 "name": "BaseBdev2", 00:11:48.227 "uuid": "722065d7-4af5-5bf4-ae28-f9d58255a381", 00:11:48.227 "is_configured": true, 00:11:48.227 "data_offset": 2048, 00:11:48.227 "data_size": 63488 00:11:48.227 }, 00:11:48.227 { 00:11:48.227 "name": "BaseBdev3", 00:11:48.227 "uuid": "2d684924-ab33-533f-b914-3cd34d7c0b37", 00:11:48.227 "is_configured": true, 00:11:48.227 "data_offset": 2048, 00:11:48.227 "data_size": 63488 00:11:48.227 } 00:11:48.227 ] 00:11:48.227 }' 00:11:48.227 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.227 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.537 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.537 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.537 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.537 [2024-12-06 18:10:13.976485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.537 [2024-12-06 18:10:13.976526] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.537 [2024-12-06 18:10:13.979928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.537 [2024-12-06 18:10:13.979989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.537 [2024-12-06 18:10:13.980043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.537 [2024-12-06 18:10:13.980057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:48.537 { 00:11:48.537 "results": [ 00:11:48.537 { 00:11:48.537 "job": "raid_bdev1", 00:11:48.537 "core_mask": "0x1", 00:11:48.537 "workload": "randrw", 00:11:48.537 "percentage": 50, 
00:11:48.537 "status": "finished", 00:11:48.537 "queue_depth": 1, 00:11:48.537 "io_size": 131072, 00:11:48.537 "runtime": 1.433877, 00:11:48.537 "iops": 10081.757361335734, 00:11:48.537 "mibps": 1260.2196701669668, 00:11:48.537 "io_failed": 1, 00:11:48.537 "io_timeout": 0, 00:11:48.537 "avg_latency_us": 138.0856533796148, 00:11:48.537 "min_latency_us": 40.96, 00:11:48.537 "max_latency_us": 1839.4763636363637 00:11:48.537 } 00:11:48.537 ], 00:11:48.537 "core_count": 1 00:11:48.537 } 00:11:48.537 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.537 18:10:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65405 00:11:48.537 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65405 ']' 00:11:48.537 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65405 00:11:48.537 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:48.537 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.537 18:10:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65405 00:11:48.537 18:10:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.537 18:10:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.537 killing process with pid 65405 00:11:48.537 18:10:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65405' 00:11:48.537 18:10:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65405 00:11:48.537 [2024-12-06 18:10:14.014565] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.537 18:10:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65405 00:11:48.795 [2024-12-06 18:10:14.219669] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.167 18:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UcDq8LBK6H 00:11:50.167 18:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:50.167 18:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:50.167 18:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:50.167 18:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:50.167 18:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.167 18:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:50.167 18:10:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:50.167 00:11:50.167 real 0m4.697s 00:11:50.167 user 0m5.861s 00:11:50.167 sys 0m0.560s 00:11:50.167 18:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.167 ************************************ 00:11:50.167 END TEST raid_read_error_test 00:11:50.168 ************************************ 00:11:50.168 18:10:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.168 18:10:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:11:50.168 18:10:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:50.168 18:10:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.168 18:10:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.168 ************************************ 00:11:50.168 START TEST raid_write_error_test 00:11:50.168 ************************************ 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:11:50.168 18:10:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:50.168 18:10:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.e5vhKVD3bl 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65546 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65546 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65546 ']' 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.168 18:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.168 [2024-12-06 18:10:15.516239] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:11:50.168 [2024-12-06 18:10:15.516682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65546 ] 00:11:50.425 [2024-12-06 18:10:15.707391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.425 [2024-12-06 18:10:15.835265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.682 [2024-12-06 18:10:16.029634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.682 [2024-12-06 18:10:16.029677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.247 BaseBdev1_malloc 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.247 true 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.247 [2024-12-06 18:10:16.586597] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:51.247 [2024-12-06 18:10:16.586664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.247 [2024-12-06 18:10:16.586720] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:51.247 [2024-12-06 18:10:16.586742] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.247 [2024-12-06 18:10:16.589517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.247 [2024-12-06 18:10:16.589565] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.247 BaseBdev1 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:51.247 BaseBdev2_malloc 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.247 true 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.247 [2024-12-06 18:10:16.641103] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:51.247 [2024-12-06 18:10:16.641316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.247 [2024-12-06 18:10:16.641362] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:51.247 [2024-12-06 18:10:16.641382] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.247 [2024-12-06 18:10:16.644203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.247 [2024-12-06 18:10:16.644253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.247 BaseBdev2 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.247 18:10:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.247 BaseBdev3_malloc 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.247 true 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.247 [2024-12-06 18:10:16.707921] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:51.247 [2024-12-06 18:10:16.708134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.247 [2024-12-06 18:10:16.708176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:51.247 [2024-12-06 18:10:16.708197] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.247 [2024-12-06 18:10:16.711169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.247 BaseBdev3 00:11:51.247 [2024-12-06 18:10:16.711365] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev3 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.247 [2024-12-06 18:10:16.716099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.247 [2024-12-06 18:10:16.718523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.247 [2024-12-06 18:10:16.718812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.247 [2024-12-06 18:10:16.719099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:51.247 [2024-12-06 18:10:16.719137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:51.247 [2024-12-06 18:10:16.719470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:51.247 [2024-12-06 18:10:16.719704] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:51.247 [2024-12-06 18:10:16.719727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:51.247 [2024-12-06 18:10:16.719995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- 
# local raid_bdev_name=raid_bdev1 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.247 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.504 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.504 "name": "raid_bdev1", 00:11:51.504 "uuid": "aa9df8a1-f1a6-425a-88c8-394c6820ab23", 00:11:51.504 "strip_size_kb": 64, 00:11:51.504 "state": "online", 00:11:51.504 "raid_level": "raid0", 00:11:51.504 "superblock": true, 00:11:51.504 "num_base_bdevs": 3, 00:11:51.504 "num_base_bdevs_discovered": 3, 00:11:51.504 "num_base_bdevs_operational": 3, 00:11:51.504 "base_bdevs_list": [ 00:11:51.504 { 00:11:51.504 "name": "BaseBdev1", 
00:11:51.504 "uuid": "a337f540-cb7e-501a-9c8a-752a6eabde06", 00:11:51.504 "is_configured": true, 00:11:51.504 "data_offset": 2048, 00:11:51.504 "data_size": 63488 00:11:51.504 }, 00:11:51.504 { 00:11:51.504 "name": "BaseBdev2", 00:11:51.504 "uuid": "19b95871-ec5d-5f28-8657-32bc0bc06922", 00:11:51.504 "is_configured": true, 00:11:51.504 "data_offset": 2048, 00:11:51.504 "data_size": 63488 00:11:51.504 }, 00:11:51.504 { 00:11:51.504 "name": "BaseBdev3", 00:11:51.504 "uuid": "0630f957-c191-5b4f-8256-b548e798bffe", 00:11:51.504 "is_configured": true, 00:11:51.504 "data_offset": 2048, 00:11:51.504 "data_size": 63488 00:11:51.504 } 00:11:51.504 ] 00:11:51.504 }' 00:11:51.504 18:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.504 18:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.761 18:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:51.761 18:10:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:52.019 [2024-12-06 18:10:17.301509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.952 "name": "raid_bdev1", 00:11:52.952 "uuid": "aa9df8a1-f1a6-425a-88c8-394c6820ab23", 00:11:52.952 "strip_size_kb": 64, 00:11:52.952 "state": "online", 00:11:52.952 
"raid_level": "raid0", 00:11:52.952 "superblock": true, 00:11:52.952 "num_base_bdevs": 3, 00:11:52.952 "num_base_bdevs_discovered": 3, 00:11:52.952 "num_base_bdevs_operational": 3, 00:11:52.952 "base_bdevs_list": [ 00:11:52.952 { 00:11:52.952 "name": "BaseBdev1", 00:11:52.952 "uuid": "a337f540-cb7e-501a-9c8a-752a6eabde06", 00:11:52.952 "is_configured": true, 00:11:52.952 "data_offset": 2048, 00:11:52.952 "data_size": 63488 00:11:52.952 }, 00:11:52.952 { 00:11:52.952 "name": "BaseBdev2", 00:11:52.952 "uuid": "19b95871-ec5d-5f28-8657-32bc0bc06922", 00:11:52.952 "is_configured": true, 00:11:52.952 "data_offset": 2048, 00:11:52.952 "data_size": 63488 00:11:52.952 }, 00:11:52.952 { 00:11:52.952 "name": "BaseBdev3", 00:11:52.952 "uuid": "0630f957-c191-5b4f-8256-b548e798bffe", 00:11:52.952 "is_configured": true, 00:11:52.952 "data_offset": 2048, 00:11:52.952 "data_size": 63488 00:11:52.952 } 00:11:52.952 ] 00:11:52.952 }' 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.952 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.268 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:53.268 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.268 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.268 [2024-12-06 18:10:18.712719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.268 [2024-12-06 18:10:18.712759] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.268 [2024-12-06 18:10:18.716256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.268 [2024-12-06 18:10:18.716319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.268 [2024-12-06 18:10:18.716387] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.268 [2024-12-06 18:10:18.716402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:53.268 { 00:11:53.268 "results": [ 00:11:53.268 { 00:11:53.268 "job": "raid_bdev1", 00:11:53.268 "core_mask": "0x1", 00:11:53.268 "workload": "randrw", 00:11:53.268 "percentage": 50, 00:11:53.268 "status": "finished", 00:11:53.268 "queue_depth": 1, 00:11:53.268 "io_size": 131072, 00:11:53.268 "runtime": 1.408906, 00:11:53.268 "iops": 10672.110133678187, 00:11:53.268 "mibps": 1334.0137667097733, 00:11:53.268 "io_failed": 1, 00:11:53.268 "io_timeout": 0, 00:11:53.268 "avg_latency_us": 129.9697043051382, 00:11:53.268 "min_latency_us": 28.85818181818182, 00:11:53.268 "max_latency_us": 1861.8181818181818 00:11:53.268 } 00:11:53.268 ], 00:11:53.268 "core_count": 1 00:11:53.268 } 00:11:53.268 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.268 18:10:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65546 00:11:53.268 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65546 ']' 00:11:53.268 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65546 00:11:53.268 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:53.268 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.268 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65546 00:11:53.268 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.268 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.268 killing process with pid 65546 00:11:53.268 18:10:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65546' 00:11:53.268 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65546 00:11:53.268 [2024-12-06 18:10:18.759709] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.268 18:10:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65546 00:11:53.527 [2024-12-06 18:10:18.948584] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.905 18:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:54.905 18:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.e5vhKVD3bl 00:11:54.905 18:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:54.905 18:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:54.905 18:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:54.905 18:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.905 18:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:54.905 18:10:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:54.905 00:11:54.905 real 0m4.657s 00:11:54.905 user 0m5.733s 00:11:54.905 sys 0m0.614s 00:11:54.905 18:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.905 18:10:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.905 ************************************ 00:11:54.905 END TEST raid_write_error_test 00:11:54.905 ************************************ 00:11:54.905 18:10:20 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:54.905 18:10:20 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:11:54.905 18:10:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:54.905 18:10:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.905 18:10:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.905 ************************************ 00:11:54.905 START TEST raid_state_function_test 00:11:54.905 ************************************ 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:54.905 18:10:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65690 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65690' 00:11:54.905 Process raid pid: 65690 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65690 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65690 ']' 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.905 18:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.905 [2024-12-06 18:10:20.190267] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:11:54.905 [2024-12-06 18:10:20.190452] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.905 [2024-12-06 18:10:20.376219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.164 [2024-12-06 18:10:20.506436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.423 [2024-12-06 18:10:20.709589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.423 [2024-12-06 18:10:20.709640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.682 [2024-12-06 18:10:21.135744] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.682 [2024-12-06 18:10:21.135824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.682 [2024-12-06 18:10:21.135842] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.682 [2024-12-06 18:10:21.135859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.682 [2024-12-06 18:10:21.135869] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:55.682 [2024-12-06 18:10:21.135884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.682 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.682 "name": "Existed_Raid", 00:11:55.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.682 "strip_size_kb": 64, 00:11:55.682 "state": "configuring", 00:11:55.682 "raid_level": "concat", 00:11:55.682 "superblock": false, 00:11:55.682 "num_base_bdevs": 3, 00:11:55.682 "num_base_bdevs_discovered": 0, 00:11:55.682 "num_base_bdevs_operational": 3, 00:11:55.682 "base_bdevs_list": [ 00:11:55.682 { 00:11:55.682 "name": "BaseBdev1", 00:11:55.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.682 "is_configured": false, 00:11:55.682 "data_offset": 0, 00:11:55.682 "data_size": 0 00:11:55.682 }, 00:11:55.682 { 00:11:55.682 "name": "BaseBdev2", 00:11:55.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.682 "is_configured": false, 00:11:55.682 "data_offset": 0, 00:11:55.682 "data_size": 0 00:11:55.682 }, 00:11:55.682 { 00:11:55.682 "name": "BaseBdev3", 00:11:55.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.682 
"is_configured": false, 00:11:55.683 "data_offset": 0, 00:11:55.683 "data_size": 0 00:11:55.683 } 00:11:55.683 ] 00:11:55.683 }' 00:11:55.683 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.683 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.253 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:56.253 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.253 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.253 [2024-12-06 18:10:21.635847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.253 [2024-12-06 18:10:21.635894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:56.253 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.253 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:56.253 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.253 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.253 [2024-12-06 18:10:21.643859] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.253 [2024-12-06 18:10:21.643913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.253 [2024-12-06 18:10:21.643929] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.254 [2024-12-06 18:10:21.643945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.254 [2024-12-06 
18:10:21.643955] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:56.254 [2024-12-06 18:10:21.643969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.254 [2024-12-06 18:10:21.688673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.254 BaseBdev1 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.254 18:10:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.254 [ 00:11:56.254 { 00:11:56.254 "name": "BaseBdev1", 00:11:56.254 "aliases": [ 00:11:56.254 "5c4a493a-8b49-498d-b9cb-76c28b035c2f" 00:11:56.254 ], 00:11:56.254 "product_name": "Malloc disk", 00:11:56.254 "block_size": 512, 00:11:56.254 "num_blocks": 65536, 00:11:56.254 "uuid": "5c4a493a-8b49-498d-b9cb-76c28b035c2f", 00:11:56.254 "assigned_rate_limits": { 00:11:56.254 "rw_ios_per_sec": 0, 00:11:56.254 "rw_mbytes_per_sec": 0, 00:11:56.254 "r_mbytes_per_sec": 0, 00:11:56.254 "w_mbytes_per_sec": 0 00:11:56.254 }, 00:11:56.254 "claimed": true, 00:11:56.254 "claim_type": "exclusive_write", 00:11:56.254 "zoned": false, 00:11:56.254 "supported_io_types": { 00:11:56.254 "read": true, 00:11:56.254 "write": true, 00:11:56.254 "unmap": true, 00:11:56.254 "flush": true, 00:11:56.254 "reset": true, 00:11:56.254 "nvme_admin": false, 00:11:56.254 "nvme_io": false, 00:11:56.254 "nvme_io_md": false, 00:11:56.254 "write_zeroes": true, 00:11:56.254 "zcopy": true, 00:11:56.254 "get_zone_info": false, 00:11:56.254 "zone_management": false, 00:11:56.254 "zone_append": false, 00:11:56.254 "compare": false, 00:11:56.254 "compare_and_write": false, 00:11:56.254 "abort": true, 00:11:56.254 "seek_hole": false, 00:11:56.254 "seek_data": false, 00:11:56.254 "copy": true, 00:11:56.254 "nvme_iov_md": false 00:11:56.254 }, 00:11:56.254 "memory_domains": [ 00:11:56.254 { 00:11:56.254 "dma_device_id": "system", 00:11:56.254 "dma_device_type": 1 00:11:56.254 }, 00:11:56.254 { 00:11:56.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.254 "dma_device_type": 
2 00:11:56.254 } 00:11:56.254 ], 00:11:56.254 "driver_specific": {} 00:11:56.254 } 00:11:56.254 ] 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.254 "name": "Existed_Raid", 00:11:56.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.254 "strip_size_kb": 64, 00:11:56.254 "state": "configuring", 00:11:56.254 "raid_level": "concat", 00:11:56.254 "superblock": false, 00:11:56.254 "num_base_bdevs": 3, 00:11:56.254 "num_base_bdevs_discovered": 1, 00:11:56.254 "num_base_bdevs_operational": 3, 00:11:56.254 "base_bdevs_list": [ 00:11:56.254 { 00:11:56.254 "name": "BaseBdev1", 00:11:56.254 "uuid": "5c4a493a-8b49-498d-b9cb-76c28b035c2f", 00:11:56.254 "is_configured": true, 00:11:56.254 "data_offset": 0, 00:11:56.254 "data_size": 65536 00:11:56.254 }, 00:11:56.254 { 00:11:56.254 "name": "BaseBdev2", 00:11:56.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.254 "is_configured": false, 00:11:56.254 "data_offset": 0, 00:11:56.254 "data_size": 0 00:11:56.254 }, 00:11:56.254 { 00:11:56.254 "name": "BaseBdev3", 00:11:56.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.254 "is_configured": false, 00:11:56.254 "data_offset": 0, 00:11:56.254 "data_size": 0 00:11:56.254 } 00:11:56.254 ] 00:11:56.254 }' 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.254 18:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.824 [2024-12-06 18:10:22.196894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.824 [2024-12-06 18:10:22.196970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.824 [2024-12-06 18:10:22.204922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.824 [2024-12-06 18:10:22.207342] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.824 [2024-12-06 18:10:22.207413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.824 [2024-12-06 18:10:22.207429] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:56.824 [2024-12-06 18:10:22.207445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.824 "name": "Existed_Raid", 00:11:56.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.824 "strip_size_kb": 64, 00:11:56.824 "state": "configuring", 00:11:56.824 "raid_level": "concat", 00:11:56.824 "superblock": false, 00:11:56.824 "num_base_bdevs": 3, 00:11:56.824 "num_base_bdevs_discovered": 1, 00:11:56.824 "num_base_bdevs_operational": 3, 00:11:56.824 "base_bdevs_list": [ 00:11:56.824 { 00:11:56.824 "name": "BaseBdev1", 00:11:56.824 "uuid": "5c4a493a-8b49-498d-b9cb-76c28b035c2f", 00:11:56.824 "is_configured": true, 00:11:56.824 "data_offset": 0, 00:11:56.824 "data_size": 65536 
00:11:56.824 }, 00:11:56.824 { 00:11:56.824 "name": "BaseBdev2", 00:11:56.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.824 "is_configured": false, 00:11:56.824 "data_offset": 0, 00:11:56.824 "data_size": 0 00:11:56.824 }, 00:11:56.824 { 00:11:56.824 "name": "BaseBdev3", 00:11:56.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.824 "is_configured": false, 00:11:56.824 "data_offset": 0, 00:11:56.824 "data_size": 0 00:11:56.824 } 00:11:56.824 ] 00:11:56.824 }' 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.824 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.392 [2024-12-06 18:10:22.767956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.392 BaseBdev2 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.392 18:10:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.392 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.392 [ 00:11:57.392 { 00:11:57.392 "name": "BaseBdev2", 00:11:57.392 "aliases": [ 00:11:57.392 "bfea8fd2-696f-4ca0-a685-b3a90ad0428d" 00:11:57.392 ], 00:11:57.392 "product_name": "Malloc disk", 00:11:57.392 "block_size": 512, 00:11:57.392 "num_blocks": 65536, 00:11:57.392 "uuid": "bfea8fd2-696f-4ca0-a685-b3a90ad0428d", 00:11:57.392 "assigned_rate_limits": { 00:11:57.392 "rw_ios_per_sec": 0, 00:11:57.392 "rw_mbytes_per_sec": 0, 00:11:57.392 "r_mbytes_per_sec": 0, 00:11:57.392 "w_mbytes_per_sec": 0 00:11:57.392 }, 00:11:57.392 "claimed": true, 00:11:57.392 "claim_type": "exclusive_write", 00:11:57.392 "zoned": false, 00:11:57.392 "supported_io_types": { 00:11:57.393 "read": true, 00:11:57.393 "write": true, 00:11:57.393 "unmap": true, 00:11:57.393 "flush": true, 00:11:57.393 "reset": true, 00:11:57.393 "nvme_admin": false, 00:11:57.393 "nvme_io": false, 00:11:57.393 "nvme_io_md": false, 00:11:57.393 "write_zeroes": true, 00:11:57.393 "zcopy": true, 00:11:57.393 "get_zone_info": false, 00:11:57.393 "zone_management": false, 00:11:57.393 "zone_append": false, 00:11:57.393 "compare": false, 00:11:57.393 "compare_and_write": false, 00:11:57.393 "abort": true, 00:11:57.393 "seek_hole": false, 00:11:57.393 
"seek_data": false, 00:11:57.393 "copy": true, 00:11:57.393 "nvme_iov_md": false 00:11:57.393 }, 00:11:57.393 "memory_domains": [ 00:11:57.393 { 00:11:57.393 "dma_device_id": "system", 00:11:57.393 "dma_device_type": 1 00:11:57.393 }, 00:11:57.393 { 00:11:57.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.393 "dma_device_type": 2 00:11:57.393 } 00:11:57.393 ], 00:11:57.393 "driver_specific": {} 00:11:57.393 } 00:11:57.393 ] 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.393 "name": "Existed_Raid", 00:11:57.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.393 "strip_size_kb": 64, 00:11:57.393 "state": "configuring", 00:11:57.393 "raid_level": "concat", 00:11:57.393 "superblock": false, 00:11:57.393 "num_base_bdevs": 3, 00:11:57.393 "num_base_bdevs_discovered": 2, 00:11:57.393 "num_base_bdevs_operational": 3, 00:11:57.393 "base_bdevs_list": [ 00:11:57.393 { 00:11:57.393 "name": "BaseBdev1", 00:11:57.393 "uuid": "5c4a493a-8b49-498d-b9cb-76c28b035c2f", 00:11:57.393 "is_configured": true, 00:11:57.393 "data_offset": 0, 00:11:57.393 "data_size": 65536 00:11:57.393 }, 00:11:57.393 { 00:11:57.393 "name": "BaseBdev2", 00:11:57.393 "uuid": "bfea8fd2-696f-4ca0-a685-b3a90ad0428d", 00:11:57.393 "is_configured": true, 00:11:57.393 "data_offset": 0, 00:11:57.393 "data_size": 65536 00:11:57.393 }, 00:11:57.393 { 00:11:57.393 "name": "BaseBdev3", 00:11:57.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.393 "is_configured": false, 00:11:57.393 "data_offset": 0, 00:11:57.393 "data_size": 0 00:11:57.393 } 00:11:57.393 ] 00:11:57.393 }' 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.393 18:10:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.961 [2024-12-06 18:10:23.366278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.961 [2024-12-06 18:10:23.366342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:57.961 [2024-12-06 18:10:23.366363] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:57.961 [2024-12-06 18:10:23.366706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:57.961 [2024-12-06 18:10:23.366970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:57.961 [2024-12-06 18:10:23.367005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:57.961 [2024-12-06 18:10:23.367321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.961 BaseBdev3 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.961 18:10:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.961 [ 00:11:57.961 { 00:11:57.961 "name": "BaseBdev3", 00:11:57.961 "aliases": [ 00:11:57.961 "53188c4f-9a2d-43b0-9e93-d212733dc0a2" 00:11:57.961 ], 00:11:57.961 "product_name": "Malloc disk", 00:11:57.961 "block_size": 512, 00:11:57.961 "num_blocks": 65536, 00:11:57.961 "uuid": "53188c4f-9a2d-43b0-9e93-d212733dc0a2", 00:11:57.961 "assigned_rate_limits": { 00:11:57.961 "rw_ios_per_sec": 0, 00:11:57.961 "rw_mbytes_per_sec": 0, 00:11:57.961 "r_mbytes_per_sec": 0, 00:11:57.961 "w_mbytes_per_sec": 0 00:11:57.961 }, 00:11:57.961 "claimed": true, 00:11:57.961 "claim_type": "exclusive_write", 00:11:57.961 "zoned": false, 00:11:57.961 "supported_io_types": { 00:11:57.961 "read": true, 00:11:57.961 "write": true, 00:11:57.961 "unmap": true, 00:11:57.961 "flush": true, 00:11:57.961 "reset": true, 00:11:57.961 "nvme_admin": false, 00:11:57.961 "nvme_io": false, 00:11:57.961 "nvme_io_md": false, 00:11:57.961 "write_zeroes": true, 00:11:57.961 "zcopy": true, 00:11:57.961 "get_zone_info": false, 00:11:57.961 "zone_management": false, 00:11:57.961 "zone_append": false, 00:11:57.961 "compare": false, 
00:11:57.961 "compare_and_write": false, 00:11:57.961 "abort": true, 00:11:57.961 "seek_hole": false, 00:11:57.961 "seek_data": false, 00:11:57.961 "copy": true, 00:11:57.961 "nvme_iov_md": false 00:11:57.961 }, 00:11:57.961 "memory_domains": [ 00:11:57.961 { 00:11:57.961 "dma_device_id": "system", 00:11:57.961 "dma_device_type": 1 00:11:57.961 }, 00:11:57.961 { 00:11:57.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.961 "dma_device_type": 2 00:11:57.961 } 00:11:57.961 ], 00:11:57.961 "driver_specific": {} 00:11:57.961 } 00:11:57.961 ] 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.961 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.962 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.962 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.962 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.962 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.962 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.962 "name": "Existed_Raid", 00:11:57.962 "uuid": "8cb734c8-4bb7-4444-9716-7b5ac613c5bd", 00:11:57.962 "strip_size_kb": 64, 00:11:57.962 "state": "online", 00:11:57.962 "raid_level": "concat", 00:11:57.962 "superblock": false, 00:11:57.962 "num_base_bdevs": 3, 00:11:57.962 "num_base_bdevs_discovered": 3, 00:11:57.962 "num_base_bdevs_operational": 3, 00:11:57.962 "base_bdevs_list": [ 00:11:57.962 { 00:11:57.962 "name": "BaseBdev1", 00:11:57.962 "uuid": "5c4a493a-8b49-498d-b9cb-76c28b035c2f", 00:11:57.962 "is_configured": true, 00:11:57.962 "data_offset": 0, 00:11:57.962 "data_size": 65536 00:11:57.962 }, 00:11:57.962 { 00:11:57.962 "name": "BaseBdev2", 00:11:57.962 "uuid": "bfea8fd2-696f-4ca0-a685-b3a90ad0428d", 00:11:57.962 "is_configured": true, 00:11:57.962 "data_offset": 0, 00:11:57.962 "data_size": 65536 00:11:57.962 }, 00:11:57.962 { 00:11:57.962 "name": "BaseBdev3", 00:11:57.962 "uuid": "53188c4f-9a2d-43b0-9e93-d212733dc0a2", 00:11:57.962 "is_configured": true, 00:11:57.962 "data_offset": 0, 00:11:57.962 "data_size": 65536 00:11:57.962 } 00:11:57.962 ] 00:11:57.962 }' 00:11:57.962 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
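The repeated `verify_raid_bdev_state` checks in this log reduce to selecting the `Existed_Raid` entry from `rpc_cmd bdev_raid_get_bdevs all` output (via `jq -r '.[] | select(.name == "Existed_Raid")'`) and comparing a few fields against the expected values. A minimal standalone sketch of that comparison, using an inline copy of the JSON dumped above and `sed` in place of the script's `jq` (the `get_field` helper is illustrative only, not part of bdev_raid.sh):

```shell
# Standalone sketch of the field checks done by verify_raid_bdev_state
# (bdev/bdev_raid.sh@103-115 in the log above). In the real test the JSON
# comes from: rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(...)'
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "superblock": false,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}'

# get_field NAME: pull one scalar field out of the JSON above
# (illustrative helper; the actual script uses jq for this).
get_field() {
    echo "$raid_bdev_info" | sed -n "s/.*\"$1\": \"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}.*/\1/p"
}

expected_state=online
state=$(get_field state)
if [ "$state" != "$expected_state" ]; then
    echo "Existed_Raid state is $state, expected $expected_state" >&2
    exit 1
fi
[ "$(get_field raid_level)" = "concat" ] || exit 1
# Once all three base bdevs are claimed, discovered == operational:
[ "$(get_field num_base_bdevs_discovered)" = "$(get_field num_base_bdevs_operational)" ] || exit 1
echo "Existed_Raid: state=$state level=$(get_field raid_level)"
```

In the log, the same check runs first with `expected_state=configuring` (while 0, 1, then 2 of the 3 base bdevs exist) and only flips to `online` after `bdev_malloc_create` has supplied BaseBdev3 and the raid bdev finishes configuring.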
00:11:57.962 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.571 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:58.571 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:58.571 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:58.571 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:58.571 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:58.571 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:58.571 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:58.571 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:58.571 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.571 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.571 [2024-12-06 18:10:23.922887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.571 18:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.571 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:58.571 "name": "Existed_Raid", 00:11:58.571 "aliases": [ 00:11:58.571 "8cb734c8-4bb7-4444-9716-7b5ac613c5bd" 00:11:58.571 ], 00:11:58.571 "product_name": "Raid Volume", 00:11:58.571 "block_size": 512, 00:11:58.571 "num_blocks": 196608, 00:11:58.571 "uuid": "8cb734c8-4bb7-4444-9716-7b5ac613c5bd", 00:11:58.571 "assigned_rate_limits": { 00:11:58.571 "rw_ios_per_sec": 0, 00:11:58.571 "rw_mbytes_per_sec": 0, 00:11:58.571 "r_mbytes_per_sec": 
0, 00:11:58.571 "w_mbytes_per_sec": 0 00:11:58.571 }, 00:11:58.571 "claimed": false, 00:11:58.571 "zoned": false, 00:11:58.571 "supported_io_types": { 00:11:58.571 "read": true, 00:11:58.571 "write": true, 00:11:58.571 "unmap": true, 00:11:58.571 "flush": true, 00:11:58.571 "reset": true, 00:11:58.571 "nvme_admin": false, 00:11:58.571 "nvme_io": false, 00:11:58.571 "nvme_io_md": false, 00:11:58.571 "write_zeroes": true, 00:11:58.571 "zcopy": false, 00:11:58.571 "get_zone_info": false, 00:11:58.571 "zone_management": false, 00:11:58.571 "zone_append": false, 00:11:58.571 "compare": false, 00:11:58.571 "compare_and_write": false, 00:11:58.571 "abort": false, 00:11:58.571 "seek_hole": false, 00:11:58.571 "seek_data": false, 00:11:58.571 "copy": false, 00:11:58.571 "nvme_iov_md": false 00:11:58.571 }, 00:11:58.571 "memory_domains": [ 00:11:58.571 { 00:11:58.571 "dma_device_id": "system", 00:11:58.571 "dma_device_type": 1 00:11:58.571 }, 00:11:58.571 { 00:11:58.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.571 "dma_device_type": 2 00:11:58.571 }, 00:11:58.571 { 00:11:58.571 "dma_device_id": "system", 00:11:58.571 "dma_device_type": 1 00:11:58.571 }, 00:11:58.571 { 00:11:58.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.571 "dma_device_type": 2 00:11:58.571 }, 00:11:58.571 { 00:11:58.571 "dma_device_id": "system", 00:11:58.571 "dma_device_type": 1 00:11:58.571 }, 00:11:58.571 { 00:11:58.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.571 "dma_device_type": 2 00:11:58.571 } 00:11:58.571 ], 00:11:58.571 "driver_specific": { 00:11:58.571 "raid": { 00:11:58.571 "uuid": "8cb734c8-4bb7-4444-9716-7b5ac613c5bd", 00:11:58.571 "strip_size_kb": 64, 00:11:58.571 "state": "online", 00:11:58.571 "raid_level": "concat", 00:11:58.571 "superblock": false, 00:11:58.571 "num_base_bdevs": 3, 00:11:58.571 "num_base_bdevs_discovered": 3, 00:11:58.571 "num_base_bdevs_operational": 3, 00:11:58.571 "base_bdevs_list": [ 00:11:58.571 { 00:11:58.571 "name": "BaseBdev1", 
00:11:58.571 "uuid": "5c4a493a-8b49-498d-b9cb-76c28b035c2f", 00:11:58.571 "is_configured": true, 00:11:58.571 "data_offset": 0, 00:11:58.571 "data_size": 65536 00:11:58.571 }, 00:11:58.571 { 00:11:58.571 "name": "BaseBdev2", 00:11:58.571 "uuid": "bfea8fd2-696f-4ca0-a685-b3a90ad0428d", 00:11:58.571 "is_configured": true, 00:11:58.571 "data_offset": 0, 00:11:58.571 "data_size": 65536 00:11:58.571 }, 00:11:58.571 { 00:11:58.571 "name": "BaseBdev3", 00:11:58.571 "uuid": "53188c4f-9a2d-43b0-9e93-d212733dc0a2", 00:11:58.571 "is_configured": true, 00:11:58.571 "data_offset": 0, 00:11:58.571 "data_size": 65536 00:11:58.571 } 00:11:58.571 ] 00:11:58.571 } 00:11:58.571 } 00:11:58.571 }' 00:11:58.571 18:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:58.571 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:58.571 BaseBdev2 00:11:58.571 BaseBdev3' 00:11:58.571 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.571 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:58.571 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.571 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.571 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:58.571 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.571 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.571 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.830 [2024-12-06 18:10:24.218601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.830 [2024-12-06 18:10:24.218642] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.830 [2024-12-06 18:10:24.218737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.830 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.089 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.089 "name": "Existed_Raid", 00:11:59.089 "uuid": "8cb734c8-4bb7-4444-9716-7b5ac613c5bd", 00:11:59.089 "strip_size_kb": 64, 00:11:59.089 "state": "offline", 00:11:59.089 "raid_level": "concat", 00:11:59.089 "superblock": false, 00:11:59.089 "num_base_bdevs": 3, 00:11:59.089 "num_base_bdevs_discovered": 2, 00:11:59.089 "num_base_bdevs_operational": 2, 00:11:59.089 "base_bdevs_list": [ 00:11:59.089 { 00:11:59.089 "name": null, 00:11:59.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.089 "is_configured": false, 00:11:59.089 "data_offset": 0, 00:11:59.089 "data_size": 65536 00:11:59.089 }, 00:11:59.089 { 00:11:59.089 "name": "BaseBdev2", 00:11:59.089 "uuid": 
"bfea8fd2-696f-4ca0-a685-b3a90ad0428d", 00:11:59.089 "is_configured": true, 00:11:59.089 "data_offset": 0, 00:11:59.089 "data_size": 65536 00:11:59.089 }, 00:11:59.089 { 00:11:59.089 "name": "BaseBdev3", 00:11:59.089 "uuid": "53188c4f-9a2d-43b0-9e93-d212733dc0a2", 00:11:59.089 "is_configured": true, 00:11:59.089 "data_offset": 0, 00:11:59.089 "data_size": 65536 00:11:59.089 } 00:11:59.089 ] 00:11:59.089 }' 00:11:59.089 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.089 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.347 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:59.347 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:59.348 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.348 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.348 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.348 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:59.348 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.348 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:59.348 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:59.348 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:59.348 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.348 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.348 [2024-12-06 18:10:24.864520] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:59.607 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.607 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:59.607 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:59.607 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.607 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.607 18:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:59.607 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.607 18:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.607 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:59.607 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:59.607 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:59.607 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.607 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.607 [2024-12-06 18:10:25.009143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:59.607 [2024-12-06 18:10:25.009213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:59.607 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.607 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:59.607 18:10:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:59.607 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.607 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.607 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.607 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:59.607 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.866 BaseBdev2 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:59.866 
18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.866 [ 00:11:59.866 { 00:11:59.866 "name": "BaseBdev2", 00:11:59.866 "aliases": [ 00:11:59.866 "b7ab7b80-fd24-483e-9626-9cef33142a0d" 00:11:59.866 ], 00:11:59.866 "product_name": "Malloc disk", 00:11:59.866 "block_size": 512, 00:11:59.866 "num_blocks": 65536, 00:11:59.866 "uuid": "b7ab7b80-fd24-483e-9626-9cef33142a0d", 00:11:59.866 "assigned_rate_limits": { 00:11:59.866 "rw_ios_per_sec": 0, 00:11:59.866 "rw_mbytes_per_sec": 0, 00:11:59.866 "r_mbytes_per_sec": 0, 00:11:59.866 "w_mbytes_per_sec": 0 00:11:59.866 }, 00:11:59.866 "claimed": false, 00:11:59.866 "zoned": false, 00:11:59.866 "supported_io_types": { 00:11:59.866 "read": true, 00:11:59.866 "write": true, 00:11:59.866 "unmap": true, 00:11:59.866 "flush": true, 00:11:59.866 "reset": true, 00:11:59.866 "nvme_admin": false, 00:11:59.866 "nvme_io": false, 00:11:59.866 "nvme_io_md": false, 00:11:59.866 "write_zeroes": true, 
00:11:59.866 "zcopy": true, 00:11:59.866 "get_zone_info": false, 00:11:59.866 "zone_management": false, 00:11:59.866 "zone_append": false, 00:11:59.866 "compare": false, 00:11:59.866 "compare_and_write": false, 00:11:59.866 "abort": true, 00:11:59.866 "seek_hole": false, 00:11:59.866 "seek_data": false, 00:11:59.866 "copy": true, 00:11:59.866 "nvme_iov_md": false 00:11:59.866 }, 00:11:59.866 "memory_domains": [ 00:11:59.866 { 00:11:59.866 "dma_device_id": "system", 00:11:59.866 "dma_device_type": 1 00:11:59.866 }, 00:11:59.866 { 00:11:59.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.866 "dma_device_type": 2 00:11:59.866 } 00:11:59.866 ], 00:11:59.866 "driver_specific": {} 00:11:59.866 } 00:11:59.866 ] 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.866 BaseBdev3 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:59.866 18:10:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.866 [ 00:11:59.866 { 00:11:59.866 "name": "BaseBdev3", 00:11:59.866 "aliases": [ 00:11:59.866 "98cdf1b8-1980-4cab-911a-023ab24456b1" 00:11:59.866 ], 00:11:59.866 "product_name": "Malloc disk", 00:11:59.866 "block_size": 512, 00:11:59.866 "num_blocks": 65536, 00:11:59.866 "uuid": "98cdf1b8-1980-4cab-911a-023ab24456b1", 00:11:59.866 "assigned_rate_limits": { 00:11:59.866 "rw_ios_per_sec": 0, 00:11:59.866 "rw_mbytes_per_sec": 0, 00:11:59.866 "r_mbytes_per_sec": 0, 00:11:59.866 "w_mbytes_per_sec": 0 00:11:59.866 }, 00:11:59.866 "claimed": false, 00:11:59.866 "zoned": false, 00:11:59.866 "supported_io_types": { 00:11:59.866 "read": true, 00:11:59.866 "write": true, 00:11:59.866 "unmap": true, 00:11:59.866 "flush": true, 00:11:59.866 "reset": true, 00:11:59.866 "nvme_admin": false, 00:11:59.866 "nvme_io": false, 00:11:59.866 "nvme_io_md": false, 00:11:59.866 "write_zeroes": true, 
00:11:59.866 "zcopy": true, 00:11:59.866 "get_zone_info": false, 00:11:59.866 "zone_management": false, 00:11:59.866 "zone_append": false, 00:11:59.866 "compare": false, 00:11:59.866 "compare_and_write": false, 00:11:59.866 "abort": true, 00:11:59.866 "seek_hole": false, 00:11:59.866 "seek_data": false, 00:11:59.866 "copy": true, 00:11:59.866 "nvme_iov_md": false 00:11:59.866 }, 00:11:59.866 "memory_domains": [ 00:11:59.866 { 00:11:59.866 "dma_device_id": "system", 00:11:59.866 "dma_device_type": 1 00:11:59.866 }, 00:11:59.866 { 00:11:59.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.866 "dma_device_type": 2 00:11:59.866 } 00:11:59.866 ], 00:11:59.866 "driver_specific": {} 00:11:59.866 } 00:11:59.866 ] 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.866 [2024-12-06 18:10:25.285169] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:59.866 [2024-12-06 18:10:25.285226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:59.866 [2024-12-06 18:10:25.285257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.866 [2024-12-06 18:10:25.287643] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.866 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.867 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.867 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.867 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.867 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.867 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.867 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.867 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.867 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.867 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.867 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.867 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.867 18:10:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.867 "name": "Existed_Raid", 00:11:59.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.867 "strip_size_kb": 64, 00:11:59.867 "state": "configuring", 00:11:59.867 "raid_level": "concat", 00:11:59.867 "superblock": false, 00:11:59.867 "num_base_bdevs": 3, 00:11:59.867 "num_base_bdevs_discovered": 2, 00:11:59.867 "num_base_bdevs_operational": 3, 00:11:59.867 "base_bdevs_list": [ 00:11:59.867 { 00:11:59.867 "name": "BaseBdev1", 00:11:59.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.867 "is_configured": false, 00:11:59.867 "data_offset": 0, 00:11:59.867 "data_size": 0 00:11:59.867 }, 00:11:59.867 { 00:11:59.867 "name": "BaseBdev2", 00:11:59.867 "uuid": "b7ab7b80-fd24-483e-9626-9cef33142a0d", 00:11:59.867 "is_configured": true, 00:11:59.867 "data_offset": 0, 00:11:59.867 "data_size": 65536 00:11:59.867 }, 00:11:59.867 { 00:11:59.867 "name": "BaseBdev3", 00:11:59.867 "uuid": "98cdf1b8-1980-4cab-911a-023ab24456b1", 00:11:59.867 "is_configured": true, 00:11:59.867 "data_offset": 0, 00:11:59.867 "data_size": 65536 00:11:59.867 } 00:11:59.867 ] 00:11:59.867 }' 00:11:59.867 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.867 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.433 [2024-12-06 18:10:25.817372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.433 "name": "Existed_Raid", 00:12:00.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.433 "strip_size_kb": 64, 00:12:00.433 "state": "configuring", 00:12:00.433 "raid_level": "concat", 00:12:00.433 "superblock": false, 
00:12:00.433 "num_base_bdevs": 3, 00:12:00.433 "num_base_bdevs_discovered": 1, 00:12:00.433 "num_base_bdevs_operational": 3, 00:12:00.433 "base_bdevs_list": [ 00:12:00.433 { 00:12:00.433 "name": "BaseBdev1", 00:12:00.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.433 "is_configured": false, 00:12:00.433 "data_offset": 0, 00:12:00.433 "data_size": 0 00:12:00.433 }, 00:12:00.433 { 00:12:00.433 "name": null, 00:12:00.433 "uuid": "b7ab7b80-fd24-483e-9626-9cef33142a0d", 00:12:00.433 "is_configured": false, 00:12:00.433 "data_offset": 0, 00:12:00.433 "data_size": 65536 00:12:00.433 }, 00:12:00.433 { 00:12:00.433 "name": "BaseBdev3", 00:12:00.433 "uuid": "98cdf1b8-1980-4cab-911a-023ab24456b1", 00:12:00.433 "is_configured": true, 00:12:00.433 "data_offset": 0, 00:12:00.433 "data_size": 65536 00:12:00.433 } 00:12:00.433 ] 00:12:00.433 }' 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.433 18:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.001 
18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.001 [2024-12-06 18:10:26.420813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.001 BaseBdev1 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.001 [ 00:12:01.001 { 00:12:01.001 "name": "BaseBdev1", 00:12:01.001 "aliases": [ 00:12:01.001 "416de814-e2e9-45ae-a553-df0704a986c2" 00:12:01.001 ], 00:12:01.001 "product_name": 
"Malloc disk", 00:12:01.001 "block_size": 512, 00:12:01.001 "num_blocks": 65536, 00:12:01.001 "uuid": "416de814-e2e9-45ae-a553-df0704a986c2", 00:12:01.001 "assigned_rate_limits": { 00:12:01.001 "rw_ios_per_sec": 0, 00:12:01.001 "rw_mbytes_per_sec": 0, 00:12:01.001 "r_mbytes_per_sec": 0, 00:12:01.001 "w_mbytes_per_sec": 0 00:12:01.001 }, 00:12:01.001 "claimed": true, 00:12:01.001 "claim_type": "exclusive_write", 00:12:01.001 "zoned": false, 00:12:01.001 "supported_io_types": { 00:12:01.001 "read": true, 00:12:01.001 "write": true, 00:12:01.001 "unmap": true, 00:12:01.001 "flush": true, 00:12:01.001 "reset": true, 00:12:01.001 "nvme_admin": false, 00:12:01.001 "nvme_io": false, 00:12:01.001 "nvme_io_md": false, 00:12:01.001 "write_zeroes": true, 00:12:01.001 "zcopy": true, 00:12:01.001 "get_zone_info": false, 00:12:01.001 "zone_management": false, 00:12:01.001 "zone_append": false, 00:12:01.001 "compare": false, 00:12:01.001 "compare_and_write": false, 00:12:01.001 "abort": true, 00:12:01.001 "seek_hole": false, 00:12:01.001 "seek_data": false, 00:12:01.001 "copy": true, 00:12:01.001 "nvme_iov_md": false 00:12:01.001 }, 00:12:01.001 "memory_domains": [ 00:12:01.001 { 00:12:01.001 "dma_device_id": "system", 00:12:01.001 "dma_device_type": 1 00:12:01.001 }, 00:12:01.001 { 00:12:01.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.001 "dma_device_type": 2 00:12:01.001 } 00:12:01.001 ], 00:12:01.001 "driver_specific": {} 00:12:01.001 } 00:12:01.001 ] 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.001 18:10:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.001 "name": "Existed_Raid", 00:12:01.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.001 "strip_size_kb": 64, 00:12:01.001 "state": "configuring", 00:12:01.001 "raid_level": "concat", 00:12:01.001 "superblock": false, 00:12:01.001 "num_base_bdevs": 3, 00:12:01.001 "num_base_bdevs_discovered": 2, 00:12:01.001 "num_base_bdevs_operational": 3, 00:12:01.001 "base_bdevs_list": [ 00:12:01.001 { 00:12:01.001 "name": "BaseBdev1", 
00:12:01.001 "uuid": "416de814-e2e9-45ae-a553-df0704a986c2", 00:12:01.001 "is_configured": true, 00:12:01.001 "data_offset": 0, 00:12:01.001 "data_size": 65536 00:12:01.001 }, 00:12:01.001 { 00:12:01.001 "name": null, 00:12:01.001 "uuid": "b7ab7b80-fd24-483e-9626-9cef33142a0d", 00:12:01.001 "is_configured": false, 00:12:01.001 "data_offset": 0, 00:12:01.001 "data_size": 65536 00:12:01.001 }, 00:12:01.001 { 00:12:01.001 "name": "BaseBdev3", 00:12:01.001 "uuid": "98cdf1b8-1980-4cab-911a-023ab24456b1", 00:12:01.001 "is_configured": true, 00:12:01.001 "data_offset": 0, 00:12:01.001 "data_size": 65536 00:12:01.001 } 00:12:01.001 ] 00:12:01.001 }' 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.001 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.569 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.569 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:01.569 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.569 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.569 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.569 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:01.569 18:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:01.569 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.569 18:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.569 [2024-12-06 18:10:27.001041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:01.569 
18:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.569 "name": "Existed_Raid", 00:12:01.569 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:01.569 "strip_size_kb": 64, 00:12:01.569 "state": "configuring", 00:12:01.569 "raid_level": "concat", 00:12:01.569 "superblock": false, 00:12:01.569 "num_base_bdevs": 3, 00:12:01.569 "num_base_bdevs_discovered": 1, 00:12:01.569 "num_base_bdevs_operational": 3, 00:12:01.569 "base_bdevs_list": [ 00:12:01.569 { 00:12:01.569 "name": "BaseBdev1", 00:12:01.569 "uuid": "416de814-e2e9-45ae-a553-df0704a986c2", 00:12:01.569 "is_configured": true, 00:12:01.569 "data_offset": 0, 00:12:01.569 "data_size": 65536 00:12:01.569 }, 00:12:01.569 { 00:12:01.569 "name": null, 00:12:01.569 "uuid": "b7ab7b80-fd24-483e-9626-9cef33142a0d", 00:12:01.569 "is_configured": false, 00:12:01.569 "data_offset": 0, 00:12:01.569 "data_size": 65536 00:12:01.569 }, 00:12:01.569 { 00:12:01.569 "name": null, 00:12:01.569 "uuid": "98cdf1b8-1980-4cab-911a-023ab24456b1", 00:12:01.569 "is_configured": false, 00:12:01.569 "data_offset": 0, 00:12:01.569 "data_size": 65536 00:12:01.569 } 00:12:01.569 ] 00:12:01.569 }' 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.569 18:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.138 [2024-12-06 18:10:27.609222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.138 18:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.397 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.397 "name": "Existed_Raid", 00:12:02.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.397 "strip_size_kb": 64, 00:12:02.397 "state": "configuring", 00:12:02.397 "raid_level": "concat", 00:12:02.397 "superblock": false, 00:12:02.397 "num_base_bdevs": 3, 00:12:02.397 "num_base_bdevs_discovered": 2, 00:12:02.397 "num_base_bdevs_operational": 3, 00:12:02.397 "base_bdevs_list": [ 00:12:02.397 { 00:12:02.397 "name": "BaseBdev1", 00:12:02.397 "uuid": "416de814-e2e9-45ae-a553-df0704a986c2", 00:12:02.397 "is_configured": true, 00:12:02.397 "data_offset": 0, 00:12:02.397 "data_size": 65536 00:12:02.397 }, 00:12:02.397 { 00:12:02.397 "name": null, 00:12:02.397 "uuid": "b7ab7b80-fd24-483e-9626-9cef33142a0d", 00:12:02.397 "is_configured": false, 00:12:02.397 "data_offset": 0, 00:12:02.397 "data_size": 65536 00:12:02.397 }, 00:12:02.397 { 00:12:02.397 "name": "BaseBdev3", 00:12:02.397 "uuid": "98cdf1b8-1980-4cab-911a-023ab24456b1", 00:12:02.397 "is_configured": true, 00:12:02.397 "data_offset": 0, 00:12:02.397 "data_size": 65536 00:12:02.397 } 00:12:02.397 ] 00:12:02.397 }' 00:12:02.397 18:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.397 18:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.656 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.656 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.656 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:02.656 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:02.657 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.915 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:02.915 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:02.915 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.915 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.915 [2024-12-06 18:10:28.201552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:02.915 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.915 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:02.915 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.915 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.915 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.915 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.915 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.915 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.915 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.915 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.916 18:10:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.916 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.916 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.916 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.916 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.916 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.916 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.916 "name": "Existed_Raid", 00:12:02.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.916 "strip_size_kb": 64, 00:12:02.916 "state": "configuring", 00:12:02.916 "raid_level": "concat", 00:12:02.916 "superblock": false, 00:12:02.916 "num_base_bdevs": 3, 00:12:02.916 "num_base_bdevs_discovered": 1, 00:12:02.916 "num_base_bdevs_operational": 3, 00:12:02.916 "base_bdevs_list": [ 00:12:02.916 { 00:12:02.916 "name": null, 00:12:02.916 "uuid": "416de814-e2e9-45ae-a553-df0704a986c2", 00:12:02.916 "is_configured": false, 00:12:02.916 "data_offset": 0, 00:12:02.916 "data_size": 65536 00:12:02.916 }, 00:12:02.916 { 00:12:02.916 "name": null, 00:12:02.916 "uuid": "b7ab7b80-fd24-483e-9626-9cef33142a0d", 00:12:02.916 "is_configured": false, 00:12:02.916 "data_offset": 0, 00:12:02.916 "data_size": 65536 00:12:02.916 }, 00:12:02.916 { 00:12:02.916 "name": "BaseBdev3", 00:12:02.916 "uuid": "98cdf1b8-1980-4cab-911a-023ab24456b1", 00:12:02.916 "is_configured": true, 00:12:02.916 "data_offset": 0, 00:12:02.916 "data_size": 65536 00:12:02.916 } 00:12:02.916 ] 00:12:02.916 }' 00:12:02.916 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.916 18:10:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.556 [2024-12-06 18:10:28.847347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.556 18:10:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.556 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.556 "name": "Existed_Raid", 00:12:03.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.556 "strip_size_kb": 64, 00:12:03.556 "state": "configuring", 00:12:03.556 "raid_level": "concat", 00:12:03.556 "superblock": false, 00:12:03.556 "num_base_bdevs": 3, 00:12:03.556 "num_base_bdevs_discovered": 2, 00:12:03.556 "num_base_bdevs_operational": 3, 00:12:03.556 "base_bdevs_list": [ 00:12:03.556 { 00:12:03.556 "name": null, 00:12:03.556 "uuid": "416de814-e2e9-45ae-a553-df0704a986c2", 00:12:03.556 "is_configured": false, 00:12:03.556 "data_offset": 0, 00:12:03.556 "data_size": 65536 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "name": "BaseBdev2", 00:12:03.556 "uuid": "b7ab7b80-fd24-483e-9626-9cef33142a0d", 00:12:03.556 "is_configured": true, 00:12:03.556 "data_offset": 
0, 00:12:03.556 "data_size": 65536 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "name": "BaseBdev3", 00:12:03.556 "uuid": "98cdf1b8-1980-4cab-911a-023ab24456b1", 00:12:03.556 "is_configured": true, 00:12:03.556 "data_offset": 0, 00:12:03.556 "data_size": 65536 00:12:03.556 } 00:12:03.556 ] 00:12:03.556 }' 00:12:03.557 18:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.557 18:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 416de814-e2e9-45ae-a553-df0704a986c2 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.123 [2024-12-06 18:10:29.513021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:04.123 [2024-12-06 18:10:29.513074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:04.123 [2024-12-06 18:10:29.513090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:04.123 [2024-12-06 18:10:29.513404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:04.123 [2024-12-06 18:10:29.513597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:04.123 [2024-12-06 18:10:29.513613] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:04.123 NewBaseBdev 00:12:04.123 [2024-12-06 18:10:29.513948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.123 
18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.123 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.123 [ 00:12:04.123 { 00:12:04.123 "name": "NewBaseBdev", 00:12:04.123 "aliases": [ 00:12:04.123 "416de814-e2e9-45ae-a553-df0704a986c2" 00:12:04.123 ], 00:12:04.123 "product_name": "Malloc disk", 00:12:04.123 "block_size": 512, 00:12:04.123 "num_blocks": 65536, 00:12:04.123 "uuid": "416de814-e2e9-45ae-a553-df0704a986c2", 00:12:04.124 "assigned_rate_limits": { 00:12:04.124 "rw_ios_per_sec": 0, 00:12:04.124 "rw_mbytes_per_sec": 0, 00:12:04.124 "r_mbytes_per_sec": 0, 00:12:04.124 "w_mbytes_per_sec": 0 00:12:04.124 }, 00:12:04.124 "claimed": true, 00:12:04.124 "claim_type": "exclusive_write", 00:12:04.124 "zoned": false, 00:12:04.124 "supported_io_types": { 00:12:04.124 "read": true, 00:12:04.124 "write": true, 00:12:04.124 "unmap": true, 00:12:04.124 "flush": true, 00:12:04.124 "reset": true, 00:12:04.124 "nvme_admin": false, 00:12:04.124 "nvme_io": false, 00:12:04.124 "nvme_io_md": false, 00:12:04.124 "write_zeroes": true, 00:12:04.124 "zcopy": true, 00:12:04.124 "get_zone_info": false, 00:12:04.124 "zone_management": false, 00:12:04.124 "zone_append": false, 00:12:04.124 "compare": false, 00:12:04.124 "compare_and_write": false, 00:12:04.124 "abort": true, 00:12:04.124 "seek_hole": false, 00:12:04.124 "seek_data": false, 00:12:04.124 "copy": true, 00:12:04.124 "nvme_iov_md": false 00:12:04.124 }, 00:12:04.124 
"memory_domains": [ 00:12:04.124 { 00:12:04.124 "dma_device_id": "system", 00:12:04.124 "dma_device_type": 1 00:12:04.124 }, 00:12:04.124 { 00:12:04.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.124 "dma_device_type": 2 00:12:04.124 } 00:12:04.124 ], 00:12:04.124 "driver_specific": {} 00:12:04.124 } 00:12:04.124 ] 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.124 "name": "Existed_Raid", 00:12:04.124 "uuid": "b8256708-5b90-4890-a3ec-db4b76e3081c", 00:12:04.124 "strip_size_kb": 64, 00:12:04.124 "state": "online", 00:12:04.124 "raid_level": "concat", 00:12:04.124 "superblock": false, 00:12:04.124 "num_base_bdevs": 3, 00:12:04.124 "num_base_bdevs_discovered": 3, 00:12:04.124 "num_base_bdevs_operational": 3, 00:12:04.124 "base_bdevs_list": [ 00:12:04.124 { 00:12:04.124 "name": "NewBaseBdev", 00:12:04.124 "uuid": "416de814-e2e9-45ae-a553-df0704a986c2", 00:12:04.124 "is_configured": true, 00:12:04.124 "data_offset": 0, 00:12:04.124 "data_size": 65536 00:12:04.124 }, 00:12:04.124 { 00:12:04.124 "name": "BaseBdev2", 00:12:04.124 "uuid": "b7ab7b80-fd24-483e-9626-9cef33142a0d", 00:12:04.124 "is_configured": true, 00:12:04.124 "data_offset": 0, 00:12:04.124 "data_size": 65536 00:12:04.124 }, 00:12:04.124 { 00:12:04.124 "name": "BaseBdev3", 00:12:04.124 "uuid": "98cdf1b8-1980-4cab-911a-023ab24456b1", 00:12:04.124 "is_configured": true, 00:12:04.124 "data_offset": 0, 00:12:04.124 "data_size": 65536 00:12:04.124 } 00:12:04.124 ] 00:12:04.124 }' 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.124 18:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.689 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:04.689 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:04.689 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:12:04.689 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:04.689 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:04.689 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:04.689 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:04.689 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.689 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.689 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:04.689 [2024-12-06 18:10:30.045570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:04.689 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.689 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:04.689 "name": "Existed_Raid", 00:12:04.689 "aliases": [ 00:12:04.689 "b8256708-5b90-4890-a3ec-db4b76e3081c" 00:12:04.689 ], 00:12:04.689 "product_name": "Raid Volume", 00:12:04.689 "block_size": 512, 00:12:04.689 "num_blocks": 196608, 00:12:04.689 "uuid": "b8256708-5b90-4890-a3ec-db4b76e3081c", 00:12:04.689 "assigned_rate_limits": { 00:12:04.689 "rw_ios_per_sec": 0, 00:12:04.689 "rw_mbytes_per_sec": 0, 00:12:04.689 "r_mbytes_per_sec": 0, 00:12:04.689 "w_mbytes_per_sec": 0 00:12:04.689 }, 00:12:04.689 "claimed": false, 00:12:04.689 "zoned": false, 00:12:04.689 "supported_io_types": { 00:12:04.689 "read": true, 00:12:04.689 "write": true, 00:12:04.689 "unmap": true, 00:12:04.689 "flush": true, 00:12:04.689 "reset": true, 00:12:04.689 "nvme_admin": false, 00:12:04.689 "nvme_io": false, 00:12:04.689 "nvme_io_md": false, 00:12:04.689 "write_zeroes": true, 
00:12:04.689 "zcopy": false, 00:12:04.689 "get_zone_info": false, 00:12:04.689 "zone_management": false, 00:12:04.689 "zone_append": false, 00:12:04.689 "compare": false, 00:12:04.689 "compare_and_write": false, 00:12:04.689 "abort": false, 00:12:04.689 "seek_hole": false, 00:12:04.689 "seek_data": false, 00:12:04.689 "copy": false, 00:12:04.689 "nvme_iov_md": false 00:12:04.689 }, 00:12:04.689 "memory_domains": [ 00:12:04.689 { 00:12:04.689 "dma_device_id": "system", 00:12:04.689 "dma_device_type": 1 00:12:04.689 }, 00:12:04.689 { 00:12:04.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.689 "dma_device_type": 2 00:12:04.689 }, 00:12:04.689 { 00:12:04.689 "dma_device_id": "system", 00:12:04.689 "dma_device_type": 1 00:12:04.689 }, 00:12:04.689 { 00:12:04.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.689 "dma_device_type": 2 00:12:04.689 }, 00:12:04.689 { 00:12:04.689 "dma_device_id": "system", 00:12:04.689 "dma_device_type": 1 00:12:04.689 }, 00:12:04.689 { 00:12:04.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.689 "dma_device_type": 2 00:12:04.689 } 00:12:04.689 ], 00:12:04.689 "driver_specific": { 00:12:04.689 "raid": { 00:12:04.689 "uuid": "b8256708-5b90-4890-a3ec-db4b76e3081c", 00:12:04.689 "strip_size_kb": 64, 00:12:04.689 "state": "online", 00:12:04.689 "raid_level": "concat", 00:12:04.689 "superblock": false, 00:12:04.689 "num_base_bdevs": 3, 00:12:04.689 "num_base_bdevs_discovered": 3, 00:12:04.689 "num_base_bdevs_operational": 3, 00:12:04.689 "base_bdevs_list": [ 00:12:04.689 { 00:12:04.689 "name": "NewBaseBdev", 00:12:04.689 "uuid": "416de814-e2e9-45ae-a553-df0704a986c2", 00:12:04.689 "is_configured": true, 00:12:04.689 "data_offset": 0, 00:12:04.689 "data_size": 65536 00:12:04.689 }, 00:12:04.689 { 00:12:04.689 "name": "BaseBdev2", 00:12:04.689 "uuid": "b7ab7b80-fd24-483e-9626-9cef33142a0d", 00:12:04.689 "is_configured": true, 00:12:04.689 "data_offset": 0, 00:12:04.689 "data_size": 65536 00:12:04.689 }, 00:12:04.689 { 
00:12:04.689 "name": "BaseBdev3", 00:12:04.689 "uuid": "98cdf1b8-1980-4cab-911a-023ab24456b1", 00:12:04.690 "is_configured": true, 00:12:04.690 "data_offset": 0, 00:12:04.690 "data_size": 65536 00:12:04.690 } 00:12:04.690 ] 00:12:04.690 } 00:12:04.690 } 00:12:04.690 }' 00:12:04.690 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:04.690 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:04.690 BaseBdev2 00:12:04.690 BaseBdev3' 00:12:04.690 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.690 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:04.690 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.690 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.690 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:04.690 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.690 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.948 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:04.949 [2024-12-06 18:10:30.357502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:04.949 [2024-12-06 18:10:30.357661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:04.949 [2024-12-06 18:10:30.357883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.949 [2024-12-06 18:10:30.358072] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:04.949 [2024-12-06 18:10:30.358106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65690 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65690 ']' 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65690 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65690 00:12:04.949 killing process with pid 65690 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65690' 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65690 00:12:04.949 [2024-12-06 18:10:30.395214] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:04.949 18:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65690 00:12:05.207 [2024-12-06 18:10:30.665080] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:06.614 ************************************ 00:12:06.614 END TEST raid_state_function_test 00:12:06.614 ************************************ 00:12:06.614 18:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:06.614 00:12:06.614 real 0m11.634s 00:12:06.614 user 0m19.342s 00:12:06.614 sys 0m1.568s 00:12:06.614 18:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.614 18:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.614 18:10:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:12:06.614 18:10:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:06.614 18:10:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.614 18:10:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:06.614 ************************************ 00:12:06.615 START TEST raid_state_function_test_sb 00:12:06.615 ************************************ 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:06.615 Process raid pid: 66322 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66322 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66322' 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66322 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66322 ']' 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.615 18:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.615 [2024-12-06 18:10:31.883390] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:12:06.615 [2024-12-06 18:10:31.883595] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.615 [2024-12-06 18:10:32.067010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.873 [2024-12-06 18:10:32.196575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.131 [2024-12-06 18:10:32.401307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.131 [2024-12-06 18:10:32.401352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.390 [2024-12-06 18:10:32.838266] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:07.390 [2024-12-06 18:10:32.838501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:07.390 [2024-12-06 
18:10:32.838632] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:07.390 [2024-12-06 18:10:32.838786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:07.390 [2024-12-06 18:10:32.838903] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:07.390 [2024-12-06 18:10:32.839027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.390 "name": "Existed_Raid", 00:12:07.390 "uuid": "e2d74559-65c6-4b18-936c-76eeff13e70f", 00:12:07.390 "strip_size_kb": 64, 00:12:07.390 "state": "configuring", 00:12:07.390 "raid_level": "concat", 00:12:07.390 "superblock": true, 00:12:07.390 "num_base_bdevs": 3, 00:12:07.390 "num_base_bdevs_discovered": 0, 00:12:07.390 "num_base_bdevs_operational": 3, 00:12:07.390 "base_bdevs_list": [ 00:12:07.390 { 00:12:07.390 "name": "BaseBdev1", 00:12:07.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.390 "is_configured": false, 00:12:07.390 "data_offset": 0, 00:12:07.390 "data_size": 0 00:12:07.390 }, 00:12:07.390 { 00:12:07.390 "name": "BaseBdev2", 00:12:07.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.390 "is_configured": false, 00:12:07.390 "data_offset": 0, 00:12:07.390 "data_size": 0 00:12:07.390 }, 00:12:07.390 { 00:12:07.390 "name": "BaseBdev3", 00:12:07.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.390 "is_configured": false, 00:12:07.390 "data_offset": 0, 00:12:07.390 "data_size": 0 00:12:07.390 } 00:12:07.390 ] 00:12:07.390 }' 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.390 18:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.955 [2024-12-06 18:10:33.358338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:07.955 [2024-12-06 18:10:33.358385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.955 [2024-12-06 18:10:33.366335] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:07.955 [2024-12-06 18:10:33.367591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:07.955 [2024-12-06 18:10:33.367620] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:07.955 [2024-12-06 18:10:33.367638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:07.955 [2024-12-06 18:10:33.367649] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:07.955 [2024-12-06 18:10:33.367663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:07.955 
18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.955 [2024-12-06 18:10:33.410796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.955 BaseBdev1 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.955 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.955 [ 00:12:07.955 { 
00:12:07.955 "name": "BaseBdev1", 00:12:07.955 "aliases": [ 00:12:07.955 "f87f1a49-7f56-4aed-9845-50a175ec4157" 00:12:07.955 ], 00:12:07.955 "product_name": "Malloc disk", 00:12:07.955 "block_size": 512, 00:12:07.955 "num_blocks": 65536, 00:12:07.955 "uuid": "f87f1a49-7f56-4aed-9845-50a175ec4157", 00:12:07.956 "assigned_rate_limits": { 00:12:07.956 "rw_ios_per_sec": 0, 00:12:07.956 "rw_mbytes_per_sec": 0, 00:12:07.956 "r_mbytes_per_sec": 0, 00:12:07.956 "w_mbytes_per_sec": 0 00:12:07.956 }, 00:12:07.956 "claimed": true, 00:12:07.956 "claim_type": "exclusive_write", 00:12:07.956 "zoned": false, 00:12:07.956 "supported_io_types": { 00:12:07.956 "read": true, 00:12:07.956 "write": true, 00:12:07.956 "unmap": true, 00:12:07.956 "flush": true, 00:12:07.956 "reset": true, 00:12:07.956 "nvme_admin": false, 00:12:07.956 "nvme_io": false, 00:12:07.956 "nvme_io_md": false, 00:12:07.956 "write_zeroes": true, 00:12:07.956 "zcopy": true, 00:12:07.956 "get_zone_info": false, 00:12:07.956 "zone_management": false, 00:12:07.956 "zone_append": false, 00:12:07.956 "compare": false, 00:12:07.956 "compare_and_write": false, 00:12:07.956 "abort": true, 00:12:07.956 "seek_hole": false, 00:12:07.956 "seek_data": false, 00:12:07.956 "copy": true, 00:12:07.956 "nvme_iov_md": false 00:12:07.956 }, 00:12:07.956 "memory_domains": [ 00:12:07.956 { 00:12:07.956 "dma_device_id": "system", 00:12:07.956 "dma_device_type": 1 00:12:07.956 }, 00:12:07.956 { 00:12:07.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.956 "dma_device_type": 2 00:12:07.956 } 00:12:07.956 ], 00:12:07.956 "driver_specific": {} 00:12:07.956 } 00:12:07.956 ] 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.956 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.238 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.238 "name": "Existed_Raid", 00:12:08.238 "uuid": "68e0d78c-6283-40de-98d8-1088a1962862", 00:12:08.238 "strip_size_kb": 64, 00:12:08.238 "state": "configuring", 00:12:08.238 "raid_level": "concat", 00:12:08.238 "superblock": true, 00:12:08.238 
"num_base_bdevs": 3, 00:12:08.238 "num_base_bdevs_discovered": 1, 00:12:08.238 "num_base_bdevs_operational": 3, 00:12:08.238 "base_bdevs_list": [ 00:12:08.238 { 00:12:08.238 "name": "BaseBdev1", 00:12:08.238 "uuid": "f87f1a49-7f56-4aed-9845-50a175ec4157", 00:12:08.238 "is_configured": true, 00:12:08.238 "data_offset": 2048, 00:12:08.238 "data_size": 63488 00:12:08.238 }, 00:12:08.238 { 00:12:08.238 "name": "BaseBdev2", 00:12:08.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.238 "is_configured": false, 00:12:08.238 "data_offset": 0, 00:12:08.238 "data_size": 0 00:12:08.238 }, 00:12:08.238 { 00:12:08.238 "name": "BaseBdev3", 00:12:08.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.238 "is_configured": false, 00:12:08.238 "data_offset": 0, 00:12:08.238 "data_size": 0 00:12:08.238 } 00:12:08.238 ] 00:12:08.238 }' 00:12:08.238 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.238 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.515 [2024-12-06 18:10:33.950985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:08.515 [2024-12-06 18:10:33.951047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:08.515 
18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.515 [2024-12-06 18:10:33.959043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.515 [2024-12-06 18:10:33.961542] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.515 [2024-12-06 18:10:33.961714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.515 [2024-12-06 18:10:33.961848] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:08.515 [2024-12-06 18:10:33.961978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.515 18:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.515 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.515 "name": "Existed_Raid", 00:12:08.515 "uuid": "37ef1421-359e-46e8-a219-63c88d979ee9", 00:12:08.515 "strip_size_kb": 64, 00:12:08.515 "state": "configuring", 00:12:08.515 "raid_level": "concat", 00:12:08.515 "superblock": true, 00:12:08.515 "num_base_bdevs": 3, 00:12:08.515 "num_base_bdevs_discovered": 1, 00:12:08.515 "num_base_bdevs_operational": 3, 00:12:08.515 "base_bdevs_list": [ 00:12:08.515 { 00:12:08.515 "name": "BaseBdev1", 00:12:08.515 "uuid": "f87f1a49-7f56-4aed-9845-50a175ec4157", 00:12:08.515 "is_configured": true, 00:12:08.515 "data_offset": 2048, 00:12:08.515 "data_size": 63488 00:12:08.516 }, 00:12:08.516 { 00:12:08.516 "name": "BaseBdev2", 00:12:08.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.516 "is_configured": false, 00:12:08.516 "data_offset": 0, 00:12:08.516 "data_size": 0 00:12:08.516 }, 00:12:08.516 { 00:12:08.516 "name": "BaseBdev3", 00:12:08.516 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:08.516 "is_configured": false, 00:12:08.516 "data_offset": 0, 00:12:08.516 "data_size": 0 00:12:08.516 } 00:12:08.516 ] 00:12:08.516 }' 00:12:08.516 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.516 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.084 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:09.084 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.084 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.085 [2024-12-06 18:10:34.481003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.085 BaseBdev2 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.085 [ 00:12:09.085 { 00:12:09.085 "name": "BaseBdev2", 00:12:09.085 "aliases": [ 00:12:09.085 "6c178c98-5a8e-458e-bffc-1d8efd294964" 00:12:09.085 ], 00:12:09.085 "product_name": "Malloc disk", 00:12:09.085 "block_size": 512, 00:12:09.085 "num_blocks": 65536, 00:12:09.085 "uuid": "6c178c98-5a8e-458e-bffc-1d8efd294964", 00:12:09.085 "assigned_rate_limits": { 00:12:09.085 "rw_ios_per_sec": 0, 00:12:09.085 "rw_mbytes_per_sec": 0, 00:12:09.085 "r_mbytes_per_sec": 0, 00:12:09.085 "w_mbytes_per_sec": 0 00:12:09.085 }, 00:12:09.085 "claimed": true, 00:12:09.085 "claim_type": "exclusive_write", 00:12:09.085 "zoned": false, 00:12:09.085 "supported_io_types": { 00:12:09.085 "read": true, 00:12:09.085 "write": true, 00:12:09.085 "unmap": true, 00:12:09.085 "flush": true, 00:12:09.085 "reset": true, 00:12:09.085 "nvme_admin": false, 00:12:09.085 "nvme_io": false, 00:12:09.085 "nvme_io_md": false, 00:12:09.085 "write_zeroes": true, 00:12:09.085 "zcopy": true, 00:12:09.085 "get_zone_info": false, 00:12:09.085 "zone_management": false, 00:12:09.085 "zone_append": false, 00:12:09.085 "compare": false, 00:12:09.085 "compare_and_write": false, 00:12:09.085 "abort": true, 00:12:09.085 "seek_hole": false, 00:12:09.085 "seek_data": false, 00:12:09.085 "copy": true, 00:12:09.085 "nvme_iov_md": false 00:12:09.085 }, 00:12:09.085 "memory_domains": [ 00:12:09.085 { 00:12:09.085 "dma_device_id": "system", 00:12:09.085 "dma_device_type": 1 00:12:09.085 }, 00:12:09.085 { 00:12:09.085 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.085 "dma_device_type": 2 00:12:09.085 } 00:12:09.085 ], 00:12:09.085 "driver_specific": {} 00:12:09.085 } 00:12:09.085 ] 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.085 "name": "Existed_Raid", 00:12:09.085 "uuid": "37ef1421-359e-46e8-a219-63c88d979ee9", 00:12:09.085 "strip_size_kb": 64, 00:12:09.085 "state": "configuring", 00:12:09.085 "raid_level": "concat", 00:12:09.085 "superblock": true, 00:12:09.085 "num_base_bdevs": 3, 00:12:09.085 "num_base_bdevs_discovered": 2, 00:12:09.085 "num_base_bdevs_operational": 3, 00:12:09.085 "base_bdevs_list": [ 00:12:09.085 { 00:12:09.085 "name": "BaseBdev1", 00:12:09.085 "uuid": "f87f1a49-7f56-4aed-9845-50a175ec4157", 00:12:09.085 "is_configured": true, 00:12:09.085 "data_offset": 2048, 00:12:09.085 "data_size": 63488 00:12:09.085 }, 00:12:09.085 { 00:12:09.085 "name": "BaseBdev2", 00:12:09.085 "uuid": "6c178c98-5a8e-458e-bffc-1d8efd294964", 00:12:09.085 "is_configured": true, 00:12:09.085 "data_offset": 2048, 00:12:09.085 "data_size": 63488 00:12:09.085 }, 00:12:09.085 { 00:12:09.085 "name": "BaseBdev3", 00:12:09.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.085 "is_configured": false, 00:12:09.085 "data_offset": 0, 00:12:09.085 "data_size": 0 00:12:09.085 } 00:12:09.085 ] 00:12:09.085 }' 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.085 18:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:09.654 18:10:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.654 BaseBdev3 00:12:09.654 [2024-12-06 18:10:35.060061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:09.654 [2024-12-06 18:10:35.060390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:09.654 [2024-12-06 18:10:35.060420] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:09.654 [2024-12-06 18:10:35.060750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:09.654 [2024-12-06 18:10:35.061003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:09.654 [2024-12-06 18:10:35.061021] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:09.654 [2024-12-06 18:10:35.061208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:09.654 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.655 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.655 [ 00:12:09.655 { 00:12:09.655 "name": "BaseBdev3", 00:12:09.655 "aliases": [ 00:12:09.655 "5d57a2cd-4c99-4deb-98fb-732bba157893" 00:12:09.655 ], 00:12:09.655 "product_name": "Malloc disk", 00:12:09.655 "block_size": 512, 00:12:09.655 "num_blocks": 65536, 00:12:09.655 "uuid": "5d57a2cd-4c99-4deb-98fb-732bba157893", 00:12:09.655 "assigned_rate_limits": { 00:12:09.655 "rw_ios_per_sec": 0, 00:12:09.655 "rw_mbytes_per_sec": 0, 00:12:09.655 "r_mbytes_per_sec": 0, 00:12:09.655 "w_mbytes_per_sec": 0 00:12:09.655 }, 00:12:09.655 "claimed": true, 00:12:09.655 "claim_type": "exclusive_write", 00:12:09.655 "zoned": false, 00:12:09.655 "supported_io_types": { 00:12:09.655 "read": true, 00:12:09.655 "write": true, 00:12:09.655 "unmap": true, 00:12:09.655 "flush": true, 00:12:09.655 "reset": true, 00:12:09.655 "nvme_admin": false, 00:12:09.655 "nvme_io": false, 00:12:09.655 "nvme_io_md": false, 00:12:09.655 "write_zeroes": true, 00:12:09.655 "zcopy": true, 00:12:09.655 "get_zone_info": false, 00:12:09.655 "zone_management": false, 00:12:09.655 "zone_append": false, 00:12:09.655 "compare": false, 00:12:09.655 "compare_and_write": false, 00:12:09.655 "abort": true, 00:12:09.655 "seek_hole": false, 00:12:09.655 "seek_data": false, 
00:12:09.655 "copy": true, 00:12:09.655 "nvme_iov_md": false 00:12:09.655 }, 00:12:09.655 "memory_domains": [ 00:12:09.655 { 00:12:09.655 "dma_device_id": "system", 00:12:09.655 "dma_device_type": 1 00:12:09.655 }, 00:12:09.655 { 00:12:09.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.655 "dma_device_type": 2 00:12:09.655 } 00:12:09.655 ], 00:12:09.655 "driver_specific": {} 00:12:09.655 } 00:12:09.655 ] 00:12:09.655 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.655 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:09.655 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:09.655 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:09.655 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:09.655 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.655 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.655 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:09.655 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.655 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.655 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.655 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.655 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.656 18:10:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.656 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.656 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.656 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.656 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.656 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.656 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.656 "name": "Existed_Raid", 00:12:09.656 "uuid": "37ef1421-359e-46e8-a219-63c88d979ee9", 00:12:09.656 "strip_size_kb": 64, 00:12:09.656 "state": "online", 00:12:09.656 "raid_level": "concat", 00:12:09.656 "superblock": true, 00:12:09.656 "num_base_bdevs": 3, 00:12:09.656 "num_base_bdevs_discovered": 3, 00:12:09.656 "num_base_bdevs_operational": 3, 00:12:09.656 "base_bdevs_list": [ 00:12:09.656 { 00:12:09.656 "name": "BaseBdev1", 00:12:09.656 "uuid": "f87f1a49-7f56-4aed-9845-50a175ec4157", 00:12:09.656 "is_configured": true, 00:12:09.656 "data_offset": 2048, 00:12:09.656 "data_size": 63488 00:12:09.656 }, 00:12:09.656 { 00:12:09.656 "name": "BaseBdev2", 00:12:09.656 "uuid": "6c178c98-5a8e-458e-bffc-1d8efd294964", 00:12:09.656 "is_configured": true, 00:12:09.656 "data_offset": 2048, 00:12:09.656 "data_size": 63488 00:12:09.656 }, 00:12:09.656 { 00:12:09.656 "name": "BaseBdev3", 00:12:09.656 "uuid": "5d57a2cd-4c99-4deb-98fb-732bba157893", 00:12:09.656 "is_configured": true, 00:12:09.656 "data_offset": 2048, 00:12:09.656 "data_size": 63488 00:12:09.656 } 00:12:09.656 ] 00:12:09.656 }' 00:12:09.656 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.656 18:10:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.226 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:10.226 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:10.226 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:10.226 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:10.226 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:10.226 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:10.226 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:10.226 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:10.226 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.226 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.226 [2024-12-06 18:10:35.616641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.226 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.226 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:10.226 "name": "Existed_Raid", 00:12:10.226 "aliases": [ 00:12:10.226 "37ef1421-359e-46e8-a219-63c88d979ee9" 00:12:10.226 ], 00:12:10.226 "product_name": "Raid Volume", 00:12:10.226 "block_size": 512, 00:12:10.226 "num_blocks": 190464, 00:12:10.226 "uuid": "37ef1421-359e-46e8-a219-63c88d979ee9", 00:12:10.226 "assigned_rate_limits": { 00:12:10.226 "rw_ios_per_sec": 0, 00:12:10.226 "rw_mbytes_per_sec": 0, 00:12:10.226 
"r_mbytes_per_sec": 0, 00:12:10.226 "w_mbytes_per_sec": 0 00:12:10.226 }, 00:12:10.226 "claimed": false, 00:12:10.226 "zoned": false, 00:12:10.226 "supported_io_types": { 00:12:10.226 "read": true, 00:12:10.226 "write": true, 00:12:10.226 "unmap": true, 00:12:10.226 "flush": true, 00:12:10.226 "reset": true, 00:12:10.226 "nvme_admin": false, 00:12:10.226 "nvme_io": false, 00:12:10.226 "nvme_io_md": false, 00:12:10.226 "write_zeroes": true, 00:12:10.226 "zcopy": false, 00:12:10.226 "get_zone_info": false, 00:12:10.226 "zone_management": false, 00:12:10.226 "zone_append": false, 00:12:10.226 "compare": false, 00:12:10.226 "compare_and_write": false, 00:12:10.226 "abort": false, 00:12:10.226 "seek_hole": false, 00:12:10.226 "seek_data": false, 00:12:10.226 "copy": false, 00:12:10.226 "nvme_iov_md": false 00:12:10.226 }, 00:12:10.226 "memory_domains": [ 00:12:10.226 { 00:12:10.226 "dma_device_id": "system", 00:12:10.226 "dma_device_type": 1 00:12:10.226 }, 00:12:10.226 { 00:12:10.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.226 "dma_device_type": 2 00:12:10.227 }, 00:12:10.227 { 00:12:10.227 "dma_device_id": "system", 00:12:10.227 "dma_device_type": 1 00:12:10.227 }, 00:12:10.227 { 00:12:10.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.227 "dma_device_type": 2 00:12:10.227 }, 00:12:10.227 { 00:12:10.227 "dma_device_id": "system", 00:12:10.227 "dma_device_type": 1 00:12:10.227 }, 00:12:10.227 { 00:12:10.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.227 "dma_device_type": 2 00:12:10.227 } 00:12:10.227 ], 00:12:10.227 "driver_specific": { 00:12:10.227 "raid": { 00:12:10.227 "uuid": "37ef1421-359e-46e8-a219-63c88d979ee9", 00:12:10.227 "strip_size_kb": 64, 00:12:10.227 "state": "online", 00:12:10.227 "raid_level": "concat", 00:12:10.227 "superblock": true, 00:12:10.227 "num_base_bdevs": 3, 00:12:10.227 "num_base_bdevs_discovered": 3, 00:12:10.227 "num_base_bdevs_operational": 3, 00:12:10.227 "base_bdevs_list": [ 00:12:10.227 { 00:12:10.227 
"name": "BaseBdev1", 00:12:10.227 "uuid": "f87f1a49-7f56-4aed-9845-50a175ec4157", 00:12:10.227 "is_configured": true, 00:12:10.227 "data_offset": 2048, 00:12:10.227 "data_size": 63488 00:12:10.227 }, 00:12:10.227 { 00:12:10.227 "name": "BaseBdev2", 00:12:10.227 "uuid": "6c178c98-5a8e-458e-bffc-1d8efd294964", 00:12:10.227 "is_configured": true, 00:12:10.227 "data_offset": 2048, 00:12:10.227 "data_size": 63488 00:12:10.227 }, 00:12:10.227 { 00:12:10.227 "name": "BaseBdev3", 00:12:10.227 "uuid": "5d57a2cd-4c99-4deb-98fb-732bba157893", 00:12:10.227 "is_configured": true, 00:12:10.227 "data_offset": 2048, 00:12:10.227 "data_size": 63488 00:12:10.227 } 00:12:10.227 ] 00:12:10.227 } 00:12:10.227 } 00:12:10.227 }' 00:12:10.227 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:10.227 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:10.227 BaseBdev2 00:12:10.227 BaseBdev3' 00:12:10.227 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.486 18:10:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.486 [2024-12-06 18:10:35.912370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:10.486 [2024-12-06 18:10:35.912529] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.486 [2024-12-06 18:10:35.912709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:10.486 18:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.486 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:12:10.486 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:10.486 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.486 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.743 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.743 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.743 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.743 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.743 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.743 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.743 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.743 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.743 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.743 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.743 "name": "Existed_Raid", 00:12:10.743 "uuid": "37ef1421-359e-46e8-a219-63c88d979ee9", 00:12:10.743 "strip_size_kb": 64, 00:12:10.743 "state": "offline", 00:12:10.743 "raid_level": "concat", 00:12:10.743 "superblock": true, 00:12:10.743 "num_base_bdevs": 3, 00:12:10.743 "num_base_bdevs_discovered": 2, 00:12:10.743 "num_base_bdevs_operational": 2, 00:12:10.743 "base_bdevs_list": [ 00:12:10.743 { 00:12:10.743 "name": null, 00:12:10.743 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:10.743 "is_configured": false, 00:12:10.743 "data_offset": 0, 00:12:10.743 "data_size": 63488 00:12:10.743 }, 00:12:10.743 { 00:12:10.743 "name": "BaseBdev2", 00:12:10.743 "uuid": "6c178c98-5a8e-458e-bffc-1d8efd294964", 00:12:10.743 "is_configured": true, 00:12:10.743 "data_offset": 2048, 00:12:10.743 "data_size": 63488 00:12:10.743 }, 00:12:10.743 { 00:12:10.743 "name": "BaseBdev3", 00:12:10.743 "uuid": "5d57a2cd-4c99-4deb-98fb-732bba157893", 00:12:10.743 "is_configured": true, 00:12:10.743 "data_offset": 2048, 00:12:10.743 "data_size": 63488 00:12:10.743 } 00:12:10.743 ] 00:12:10.743 }' 00:12:10.743 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.743 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.002 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:11.002 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:11.002 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.002 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:11.002 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.002 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.002 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.261 [2024-12-06 18:10:36.562758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.261 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.261 [2024-12-06 18:10:36.703961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:11.261 [2024-12-06 18:10:36.704154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.520 BaseBdev2 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.520 
18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.520 [ 00:12:11.520 { 00:12:11.520 "name": "BaseBdev2", 00:12:11.520 "aliases": [ 00:12:11.520 "c94a88f4-361d-47b1-9e2a-3c526dd5f77f" 00:12:11.520 ], 00:12:11.520 "product_name": "Malloc disk", 00:12:11.520 "block_size": 512, 00:12:11.520 "num_blocks": 65536, 00:12:11.520 "uuid": "c94a88f4-361d-47b1-9e2a-3c526dd5f77f", 00:12:11.520 "assigned_rate_limits": { 00:12:11.520 "rw_ios_per_sec": 0, 00:12:11.520 "rw_mbytes_per_sec": 0, 00:12:11.520 "r_mbytes_per_sec": 0, 00:12:11.520 "w_mbytes_per_sec": 0 
00:12:11.520 }, 00:12:11.520 "claimed": false, 00:12:11.520 "zoned": false, 00:12:11.520 "supported_io_types": { 00:12:11.520 "read": true, 00:12:11.520 "write": true, 00:12:11.520 "unmap": true, 00:12:11.520 "flush": true, 00:12:11.520 "reset": true, 00:12:11.520 "nvme_admin": false, 00:12:11.520 "nvme_io": false, 00:12:11.520 "nvme_io_md": false, 00:12:11.520 "write_zeroes": true, 00:12:11.520 "zcopy": true, 00:12:11.520 "get_zone_info": false, 00:12:11.520 "zone_management": false, 00:12:11.520 "zone_append": false, 00:12:11.520 "compare": false, 00:12:11.520 "compare_and_write": false, 00:12:11.520 "abort": true, 00:12:11.520 "seek_hole": false, 00:12:11.520 "seek_data": false, 00:12:11.520 "copy": true, 00:12:11.520 "nvme_iov_md": false 00:12:11.520 }, 00:12:11.520 "memory_domains": [ 00:12:11.520 { 00:12:11.520 "dma_device_id": "system", 00:12:11.520 "dma_device_type": 1 00:12:11.520 }, 00:12:11.520 { 00:12:11.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.520 "dma_device_type": 2 00:12:11.520 } 00:12:11.520 ], 00:12:11.520 "driver_specific": {} 00:12:11.520 } 00:12:11.520 ] 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.520 BaseBdev3 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.520 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.521 [ 00:12:11.521 { 00:12:11.521 "name": "BaseBdev3", 00:12:11.521 "aliases": [ 00:12:11.521 "55dfde2c-97a1-4e7e-b725-cfe32ed0f9a0" 00:12:11.521 ], 00:12:11.521 "product_name": "Malloc disk", 00:12:11.521 "block_size": 512, 00:12:11.521 "num_blocks": 65536, 00:12:11.521 "uuid": "55dfde2c-97a1-4e7e-b725-cfe32ed0f9a0", 00:12:11.521 "assigned_rate_limits": { 00:12:11.521 "rw_ios_per_sec": 0, 00:12:11.521 "rw_mbytes_per_sec": 0, 
00:12:11.521 "r_mbytes_per_sec": 0, 00:12:11.521 "w_mbytes_per_sec": 0 00:12:11.521 }, 00:12:11.521 "claimed": false, 00:12:11.521 "zoned": false, 00:12:11.521 "supported_io_types": { 00:12:11.521 "read": true, 00:12:11.521 "write": true, 00:12:11.521 "unmap": true, 00:12:11.521 "flush": true, 00:12:11.521 "reset": true, 00:12:11.521 "nvme_admin": false, 00:12:11.521 "nvme_io": false, 00:12:11.521 "nvme_io_md": false, 00:12:11.521 "write_zeroes": true, 00:12:11.521 "zcopy": true, 00:12:11.521 "get_zone_info": false, 00:12:11.521 "zone_management": false, 00:12:11.521 "zone_append": false, 00:12:11.521 "compare": false, 00:12:11.521 "compare_and_write": false, 00:12:11.521 "abort": true, 00:12:11.521 "seek_hole": false, 00:12:11.521 "seek_data": false, 00:12:11.521 "copy": true, 00:12:11.521 "nvme_iov_md": false 00:12:11.521 }, 00:12:11.521 "memory_domains": [ 00:12:11.521 { 00:12:11.521 "dma_device_id": "system", 00:12:11.521 "dma_device_type": 1 00:12:11.521 }, 00:12:11.521 { 00:12:11.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.521 "dma_device_type": 2 00:12:11.521 } 00:12:11.521 ], 00:12:11.521 "driver_specific": {} 00:12:11.521 } 00:12:11.521 ] 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:11.521 [2024-12-06 18:10:36.985112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:11.521 [2024-12-06 18:10:36.985289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:11.521 [2024-12-06 18:10:36.985421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.521 [2024-12-06 18:10:36.987897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.521 18:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.521 18:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.780 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.780 "name": "Existed_Raid", 00:12:11.780 "uuid": "f94e2d60-1a7d-4103-984b-5957423d5c84", 00:12:11.780 "strip_size_kb": 64, 00:12:11.780 "state": "configuring", 00:12:11.780 "raid_level": "concat", 00:12:11.780 "superblock": true, 00:12:11.780 "num_base_bdevs": 3, 00:12:11.780 "num_base_bdevs_discovered": 2, 00:12:11.780 "num_base_bdevs_operational": 3, 00:12:11.780 "base_bdevs_list": [ 00:12:11.780 { 00:12:11.780 "name": "BaseBdev1", 00:12:11.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.780 "is_configured": false, 00:12:11.780 "data_offset": 0, 00:12:11.780 "data_size": 0 00:12:11.780 }, 00:12:11.780 { 00:12:11.780 "name": "BaseBdev2", 00:12:11.780 "uuid": "c94a88f4-361d-47b1-9e2a-3c526dd5f77f", 00:12:11.780 "is_configured": true, 00:12:11.780 "data_offset": 2048, 00:12:11.780 "data_size": 63488 00:12:11.780 }, 00:12:11.780 { 00:12:11.780 "name": "BaseBdev3", 00:12:11.780 "uuid": "55dfde2c-97a1-4e7e-b725-cfe32ed0f9a0", 00:12:11.780 "is_configured": true, 00:12:11.780 "data_offset": 2048, 00:12:11.780 "data_size": 63488 00:12:11.780 } 00:12:11.780 ] 00:12:11.780 }' 00:12:11.780 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.780 18:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev2 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.039 [2024-12-06 18:10:37.509279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.039 18:10:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.039 18:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.298 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.298 "name": "Existed_Raid", 00:12:12.298 "uuid": "f94e2d60-1a7d-4103-984b-5957423d5c84", 00:12:12.298 "strip_size_kb": 64, 00:12:12.298 "state": "configuring", 00:12:12.298 "raid_level": "concat", 00:12:12.298 "superblock": true, 00:12:12.298 "num_base_bdevs": 3, 00:12:12.298 "num_base_bdevs_discovered": 1, 00:12:12.298 "num_base_bdevs_operational": 3, 00:12:12.298 "base_bdevs_list": [ 00:12:12.298 { 00:12:12.298 "name": "BaseBdev1", 00:12:12.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.298 "is_configured": false, 00:12:12.298 "data_offset": 0, 00:12:12.298 "data_size": 0 00:12:12.298 }, 00:12:12.298 { 00:12:12.298 "name": null, 00:12:12.298 "uuid": "c94a88f4-361d-47b1-9e2a-3c526dd5f77f", 00:12:12.298 "is_configured": false, 00:12:12.298 "data_offset": 0, 00:12:12.298 "data_size": 63488 00:12:12.298 }, 00:12:12.298 { 00:12:12.298 "name": "BaseBdev3", 00:12:12.298 "uuid": "55dfde2c-97a1-4e7e-b725-cfe32ed0f9a0", 00:12:12.298 "is_configured": true, 00:12:12.298 "data_offset": 2048, 00:12:12.298 "data_size": 63488 00:12:12.298 } 00:12:12.298 ] 00:12:12.298 }' 00:12:12.298 18:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.298 18:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.556 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:12.556 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.556 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:12.556 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.556 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.815 [2024-12-06 18:10:38.118745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.815 BaseBdev1 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.815 [ 00:12:12.815 { 00:12:12.815 "name": "BaseBdev1", 00:12:12.815 "aliases": [ 00:12:12.815 "3cc111f5-238f-4719-87f3-a7802ce84b1e" 00:12:12.815 ], 00:12:12.815 "product_name": "Malloc disk", 00:12:12.815 "block_size": 512, 00:12:12.815 "num_blocks": 65536, 00:12:12.815 "uuid": "3cc111f5-238f-4719-87f3-a7802ce84b1e", 00:12:12.815 "assigned_rate_limits": { 00:12:12.815 "rw_ios_per_sec": 0, 00:12:12.815 "rw_mbytes_per_sec": 0, 00:12:12.815 "r_mbytes_per_sec": 0, 00:12:12.815 "w_mbytes_per_sec": 0 00:12:12.815 }, 00:12:12.815 "claimed": true, 00:12:12.815 "claim_type": "exclusive_write", 00:12:12.815 "zoned": false, 00:12:12.815 "supported_io_types": { 00:12:12.815 "read": true, 00:12:12.815 "write": true, 00:12:12.815 "unmap": true, 00:12:12.815 "flush": true, 00:12:12.815 "reset": true, 00:12:12.815 "nvme_admin": false, 00:12:12.815 "nvme_io": false, 00:12:12.815 "nvme_io_md": false, 00:12:12.815 "write_zeroes": true, 00:12:12.815 "zcopy": true, 00:12:12.815 "get_zone_info": false, 00:12:12.815 "zone_management": false, 00:12:12.815 "zone_append": false, 00:12:12.815 "compare": false, 00:12:12.815 "compare_and_write": false, 00:12:12.815 "abort": true, 00:12:12.815 "seek_hole": false, 00:12:12.815 "seek_data": false, 00:12:12.815 "copy": true, 00:12:12.815 "nvme_iov_md": false 00:12:12.815 }, 00:12:12.815 "memory_domains": [ 00:12:12.815 { 00:12:12.815 "dma_device_id": "system", 00:12:12.815 "dma_device_type": 1 00:12:12.815 }, 00:12:12.815 { 00:12:12.815 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:12.815 "dma_device_type": 2 00:12:12.815 } 00:12:12.815 ], 00:12:12.815 "driver_specific": {} 00:12:12.815 } 00:12:12.815 ] 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.815 "name": "Existed_Raid", 00:12:12.815 "uuid": "f94e2d60-1a7d-4103-984b-5957423d5c84", 00:12:12.815 "strip_size_kb": 64, 00:12:12.815 "state": "configuring", 00:12:12.815 "raid_level": "concat", 00:12:12.815 "superblock": true, 00:12:12.815 "num_base_bdevs": 3, 00:12:12.815 "num_base_bdevs_discovered": 2, 00:12:12.815 "num_base_bdevs_operational": 3, 00:12:12.815 "base_bdevs_list": [ 00:12:12.815 { 00:12:12.815 "name": "BaseBdev1", 00:12:12.815 "uuid": "3cc111f5-238f-4719-87f3-a7802ce84b1e", 00:12:12.815 "is_configured": true, 00:12:12.815 "data_offset": 2048, 00:12:12.815 "data_size": 63488 00:12:12.815 }, 00:12:12.815 { 00:12:12.815 "name": null, 00:12:12.815 "uuid": "c94a88f4-361d-47b1-9e2a-3c526dd5f77f", 00:12:12.815 "is_configured": false, 00:12:12.815 "data_offset": 0, 00:12:12.815 "data_size": 63488 00:12:12.815 }, 00:12:12.815 { 00:12:12.815 "name": "BaseBdev3", 00:12:12.815 "uuid": "55dfde2c-97a1-4e7e-b725-cfe32ed0f9a0", 00:12:12.815 "is_configured": true, 00:12:12.815 "data_offset": 2048, 00:12:12.815 "data_size": 63488 00:12:12.815 } 00:12:12.815 ] 00:12:12.815 }' 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.815 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 [2024-12-06 18:10:38.698987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.384 "name": "Existed_Raid", 00:12:13.384 "uuid": "f94e2d60-1a7d-4103-984b-5957423d5c84", 00:12:13.384 "strip_size_kb": 64, 00:12:13.384 "state": "configuring", 00:12:13.384 "raid_level": "concat", 00:12:13.384 "superblock": true, 00:12:13.384 "num_base_bdevs": 3, 00:12:13.384 "num_base_bdevs_discovered": 1, 00:12:13.384 "num_base_bdevs_operational": 3, 00:12:13.384 "base_bdevs_list": [ 00:12:13.384 { 00:12:13.384 "name": "BaseBdev1", 00:12:13.384 "uuid": "3cc111f5-238f-4719-87f3-a7802ce84b1e", 00:12:13.384 "is_configured": true, 00:12:13.384 "data_offset": 2048, 00:12:13.384 "data_size": 63488 00:12:13.384 }, 00:12:13.384 { 00:12:13.384 "name": null, 00:12:13.384 "uuid": "c94a88f4-361d-47b1-9e2a-3c526dd5f77f", 00:12:13.384 "is_configured": false, 00:12:13.384 "data_offset": 0, 00:12:13.384 "data_size": 63488 00:12:13.384 }, 00:12:13.384 { 00:12:13.384 "name": null, 00:12:13.384 "uuid": "55dfde2c-97a1-4e7e-b725-cfe32ed0f9a0", 00:12:13.384 "is_configured": false, 00:12:13.384 "data_offset": 0, 00:12:13.384 "data_size": 63488 00:12:13.384 } 00:12:13.384 ] 00:12:13.384 }' 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.384 18:10:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.952 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.952 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.952 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.952 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.953 [2024-12-06 18:10:39.287210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.953 18:10:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.953 "name": "Existed_Raid", 00:12:13.953 "uuid": "f94e2d60-1a7d-4103-984b-5957423d5c84", 00:12:13.953 "strip_size_kb": 64, 00:12:13.953 "state": "configuring", 00:12:13.953 "raid_level": "concat", 00:12:13.953 "superblock": true, 00:12:13.953 "num_base_bdevs": 3, 00:12:13.953 "num_base_bdevs_discovered": 2, 00:12:13.953 "num_base_bdevs_operational": 3, 00:12:13.953 "base_bdevs_list": [ 00:12:13.953 { 00:12:13.953 "name": "BaseBdev1", 00:12:13.953 "uuid": "3cc111f5-238f-4719-87f3-a7802ce84b1e", 00:12:13.953 "is_configured": true, 00:12:13.953 "data_offset": 2048, 00:12:13.953 "data_size": 63488 00:12:13.953 }, 00:12:13.953 { 00:12:13.953 "name": null, 00:12:13.953 "uuid": "c94a88f4-361d-47b1-9e2a-3c526dd5f77f", 00:12:13.953 "is_configured": 
false, 00:12:13.953 "data_offset": 0, 00:12:13.953 "data_size": 63488 00:12:13.953 }, 00:12:13.953 { 00:12:13.953 "name": "BaseBdev3", 00:12:13.953 "uuid": "55dfde2c-97a1-4e7e-b725-cfe32ed0f9a0", 00:12:13.953 "is_configured": true, 00:12:13.953 "data_offset": 2048, 00:12:13.953 "data_size": 63488 00:12:13.953 } 00:12:13.953 ] 00:12:13.953 }' 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.953 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.521 [2024-12-06 18:10:39.907361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:14.521 18:10:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.521 18:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.521 18:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.779 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.779 "name": "Existed_Raid", 00:12:14.779 "uuid": "f94e2d60-1a7d-4103-984b-5957423d5c84", 00:12:14.779 "strip_size_kb": 64, 00:12:14.779 "state": "configuring", 00:12:14.779 "raid_level": "concat", 00:12:14.779 "superblock": true, 00:12:14.779 "num_base_bdevs": 3, 00:12:14.779 
"num_base_bdevs_discovered": 1, 00:12:14.779 "num_base_bdevs_operational": 3, 00:12:14.779 "base_bdevs_list": [ 00:12:14.779 { 00:12:14.779 "name": null, 00:12:14.779 "uuid": "3cc111f5-238f-4719-87f3-a7802ce84b1e", 00:12:14.779 "is_configured": false, 00:12:14.779 "data_offset": 0, 00:12:14.779 "data_size": 63488 00:12:14.779 }, 00:12:14.779 { 00:12:14.779 "name": null, 00:12:14.779 "uuid": "c94a88f4-361d-47b1-9e2a-3c526dd5f77f", 00:12:14.779 "is_configured": false, 00:12:14.779 "data_offset": 0, 00:12:14.779 "data_size": 63488 00:12:14.779 }, 00:12:14.779 { 00:12:14.779 "name": "BaseBdev3", 00:12:14.779 "uuid": "55dfde2c-97a1-4e7e-b725-cfe32ed0f9a0", 00:12:14.779 "is_configured": true, 00:12:14.779 "data_offset": 2048, 00:12:14.779 "data_size": 63488 00:12:14.779 } 00:12:14.779 ] 00:12:14.779 }' 00:12:14.779 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.779 18:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.036 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.036 18:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.036 18:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.036 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:15.036 18:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.339 18:10:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.339 [2024-12-06 18:10:40.576661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.339 
18:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.339 "name": "Existed_Raid", 00:12:15.339 "uuid": "f94e2d60-1a7d-4103-984b-5957423d5c84", 00:12:15.339 "strip_size_kb": 64, 00:12:15.339 "state": "configuring", 00:12:15.339 "raid_level": "concat", 00:12:15.339 "superblock": true, 00:12:15.339 "num_base_bdevs": 3, 00:12:15.339 "num_base_bdevs_discovered": 2, 00:12:15.339 "num_base_bdevs_operational": 3, 00:12:15.339 "base_bdevs_list": [ 00:12:15.339 { 00:12:15.339 "name": null, 00:12:15.339 "uuid": "3cc111f5-238f-4719-87f3-a7802ce84b1e", 00:12:15.339 "is_configured": false, 00:12:15.339 "data_offset": 0, 00:12:15.339 "data_size": 63488 00:12:15.339 }, 00:12:15.339 { 00:12:15.339 "name": "BaseBdev2", 00:12:15.339 "uuid": "c94a88f4-361d-47b1-9e2a-3c526dd5f77f", 00:12:15.339 "is_configured": true, 00:12:15.339 "data_offset": 2048, 00:12:15.339 "data_size": 63488 00:12:15.339 }, 00:12:15.339 { 00:12:15.339 "name": "BaseBdev3", 00:12:15.339 "uuid": "55dfde2c-97a1-4e7e-b725-cfe32ed0f9a0", 00:12:15.339 "is_configured": true, 00:12:15.339 "data_offset": 2048, 00:12:15.339 "data_size": 63488 00:12:15.339 } 00:12:15.339 ] 00:12:15.339 }' 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.339 18:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.626 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:15.626 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.626 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.626 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:15.626 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.626 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:15.626 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.626 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.626 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.626 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3cc111f5-238f-4719-87f3-a7802ce84b1e 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.885 [2024-12-06 18:10:41.234277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:15.885 NewBaseBdev 00:12:15.885 [2024-12-06 18:10:41.234832] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:15.885 [2024-12-06 18:10:41.234864] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:15.885 [2024-12-06 18:10:41.235187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:15.885 [2024-12-06 18:10:41.235368] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:15.885 [2024-12-06 18:10:41.235384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:12:15.885 [2024-12-06 18:10:41.235546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.885 [ 00:12:15.885 { 00:12:15.885 "name": "NewBaseBdev", 00:12:15.885 "aliases": [ 00:12:15.885 "3cc111f5-238f-4719-87f3-a7802ce84b1e" 00:12:15.885 ], 00:12:15.885 "product_name": "Malloc disk", 00:12:15.885 "block_size": 512, 
00:12:15.885 "num_blocks": 65536, 00:12:15.885 "uuid": "3cc111f5-238f-4719-87f3-a7802ce84b1e", 00:12:15.885 "assigned_rate_limits": { 00:12:15.885 "rw_ios_per_sec": 0, 00:12:15.885 "rw_mbytes_per_sec": 0, 00:12:15.885 "r_mbytes_per_sec": 0, 00:12:15.885 "w_mbytes_per_sec": 0 00:12:15.885 }, 00:12:15.885 "claimed": true, 00:12:15.885 "claim_type": "exclusive_write", 00:12:15.885 "zoned": false, 00:12:15.885 "supported_io_types": { 00:12:15.885 "read": true, 00:12:15.885 "write": true, 00:12:15.885 "unmap": true, 00:12:15.885 "flush": true, 00:12:15.885 "reset": true, 00:12:15.885 "nvme_admin": false, 00:12:15.885 "nvme_io": false, 00:12:15.885 "nvme_io_md": false, 00:12:15.885 "write_zeroes": true, 00:12:15.885 "zcopy": true, 00:12:15.885 "get_zone_info": false, 00:12:15.885 "zone_management": false, 00:12:15.885 "zone_append": false, 00:12:15.885 "compare": false, 00:12:15.885 "compare_and_write": false, 00:12:15.885 "abort": true, 00:12:15.885 "seek_hole": false, 00:12:15.885 "seek_data": false, 00:12:15.885 "copy": true, 00:12:15.885 "nvme_iov_md": false 00:12:15.885 }, 00:12:15.885 "memory_domains": [ 00:12:15.885 { 00:12:15.885 "dma_device_id": "system", 00:12:15.885 "dma_device_type": 1 00:12:15.885 }, 00:12:15.885 { 00:12:15.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.885 "dma_device_type": 2 00:12:15.885 } 00:12:15.885 ], 00:12:15.885 "driver_specific": {} 00:12:15.885 } 00:12:15.885 ] 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.885 "name": "Existed_Raid", 00:12:15.885 "uuid": "f94e2d60-1a7d-4103-984b-5957423d5c84", 00:12:15.885 "strip_size_kb": 64, 00:12:15.885 "state": "online", 00:12:15.885 "raid_level": "concat", 00:12:15.885 "superblock": true, 00:12:15.885 "num_base_bdevs": 3, 00:12:15.885 "num_base_bdevs_discovered": 3, 00:12:15.885 "num_base_bdevs_operational": 3, 00:12:15.885 "base_bdevs_list": [ 00:12:15.885 { 00:12:15.885 "name": "NewBaseBdev", 00:12:15.885 "uuid": 
"3cc111f5-238f-4719-87f3-a7802ce84b1e", 00:12:15.885 "is_configured": true, 00:12:15.885 "data_offset": 2048, 00:12:15.885 "data_size": 63488 00:12:15.885 }, 00:12:15.885 { 00:12:15.885 "name": "BaseBdev2", 00:12:15.885 "uuid": "c94a88f4-361d-47b1-9e2a-3c526dd5f77f", 00:12:15.885 "is_configured": true, 00:12:15.885 "data_offset": 2048, 00:12:15.885 "data_size": 63488 00:12:15.885 }, 00:12:15.885 { 00:12:15.885 "name": "BaseBdev3", 00:12:15.885 "uuid": "55dfde2c-97a1-4e7e-b725-cfe32ed0f9a0", 00:12:15.885 "is_configured": true, 00:12:15.885 "data_offset": 2048, 00:12:15.885 "data_size": 63488 00:12:15.885 } 00:12:15.885 ] 00:12:15.885 }' 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.885 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.460 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:16.460 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:16.460 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:16.460 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:16.460 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:16.460 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:16.460 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:16.460 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.460 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:16.460 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:16.460 [2024-12-06 18:10:41.778862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.460 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.460 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:16.460 "name": "Existed_Raid", 00:12:16.460 "aliases": [ 00:12:16.460 "f94e2d60-1a7d-4103-984b-5957423d5c84" 00:12:16.460 ], 00:12:16.460 "product_name": "Raid Volume", 00:12:16.460 "block_size": 512, 00:12:16.460 "num_blocks": 190464, 00:12:16.460 "uuid": "f94e2d60-1a7d-4103-984b-5957423d5c84", 00:12:16.460 "assigned_rate_limits": { 00:12:16.460 "rw_ios_per_sec": 0, 00:12:16.460 "rw_mbytes_per_sec": 0, 00:12:16.460 "r_mbytes_per_sec": 0, 00:12:16.460 "w_mbytes_per_sec": 0 00:12:16.460 }, 00:12:16.460 "claimed": false, 00:12:16.460 "zoned": false, 00:12:16.460 "supported_io_types": { 00:12:16.460 "read": true, 00:12:16.460 "write": true, 00:12:16.460 "unmap": true, 00:12:16.460 "flush": true, 00:12:16.460 "reset": true, 00:12:16.460 "nvme_admin": false, 00:12:16.460 "nvme_io": false, 00:12:16.460 "nvme_io_md": false, 00:12:16.460 "write_zeroes": true, 00:12:16.460 "zcopy": false, 00:12:16.460 "get_zone_info": false, 00:12:16.460 "zone_management": false, 00:12:16.460 "zone_append": false, 00:12:16.460 "compare": false, 00:12:16.460 "compare_and_write": false, 00:12:16.460 "abort": false, 00:12:16.460 "seek_hole": false, 00:12:16.460 "seek_data": false, 00:12:16.460 "copy": false, 00:12:16.460 "nvme_iov_md": false 00:12:16.460 }, 00:12:16.460 "memory_domains": [ 00:12:16.460 { 00:12:16.460 "dma_device_id": "system", 00:12:16.460 "dma_device_type": 1 00:12:16.460 }, 00:12:16.460 { 00:12:16.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.460 "dma_device_type": 2 00:12:16.460 }, 00:12:16.460 { 00:12:16.460 "dma_device_id": "system", 00:12:16.460 "dma_device_type": 1 00:12:16.460 }, 00:12:16.460 { 00:12:16.460 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.460 "dma_device_type": 2 00:12:16.460 }, 00:12:16.460 { 00:12:16.460 "dma_device_id": "system", 00:12:16.460 "dma_device_type": 1 00:12:16.460 }, 00:12:16.460 { 00:12:16.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.460 "dma_device_type": 2 00:12:16.460 } 00:12:16.460 ], 00:12:16.460 "driver_specific": { 00:12:16.460 "raid": { 00:12:16.460 "uuid": "f94e2d60-1a7d-4103-984b-5957423d5c84", 00:12:16.460 "strip_size_kb": 64, 00:12:16.460 "state": "online", 00:12:16.460 "raid_level": "concat", 00:12:16.460 "superblock": true, 00:12:16.460 "num_base_bdevs": 3, 00:12:16.460 "num_base_bdevs_discovered": 3, 00:12:16.460 "num_base_bdevs_operational": 3, 00:12:16.460 "base_bdevs_list": [ 00:12:16.460 { 00:12:16.460 "name": "NewBaseBdev", 00:12:16.460 "uuid": "3cc111f5-238f-4719-87f3-a7802ce84b1e", 00:12:16.460 "is_configured": true, 00:12:16.460 "data_offset": 2048, 00:12:16.461 "data_size": 63488 00:12:16.461 }, 00:12:16.461 { 00:12:16.461 "name": "BaseBdev2", 00:12:16.461 "uuid": "c94a88f4-361d-47b1-9e2a-3c526dd5f77f", 00:12:16.461 "is_configured": true, 00:12:16.461 "data_offset": 2048, 00:12:16.461 "data_size": 63488 00:12:16.461 }, 00:12:16.461 { 00:12:16.461 "name": "BaseBdev3", 00:12:16.461 "uuid": "55dfde2c-97a1-4e7e-b725-cfe32ed0f9a0", 00:12:16.461 "is_configured": true, 00:12:16.461 "data_offset": 2048, 00:12:16.461 "data_size": 63488 00:12:16.461 } 00:12:16.461 ] 00:12:16.461 } 00:12:16.461 } 00:12:16.461 }' 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:16.461 BaseBdev2 00:12:16.461 BaseBdev3' 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.461 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.721 18:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.721 [2024-12-06 18:10:42.082540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:16.721 [2024-12-06 18:10:42.082696] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.721 [2024-12-06 18:10:42.082845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.721 [2024-12-06 18:10:42.082921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.721 [2024-12-06 18:10:42.082943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66322 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66322 ']' 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66322 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66322 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.721 killing process with pid 66322 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66322' 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66322 00:12:16.721 [2024-12-06 18:10:42.120679] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:16.721 18:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66322 00:12:16.981 [2024-12-06 18:10:42.388939] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.918 ************************************ 00:12:17.918 END TEST raid_state_function_test_sb 00:12:17.918 ************************************ 00:12:17.918 18:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:17.918 00:12:17.918 real 0m11.666s 
00:12:17.918 user 0m19.377s 00:12:17.918 sys 0m1.541s 00:12:17.918 18:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.918 18:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.177 18:10:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:12:18.177 18:10:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:18.177 18:10:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.177 18:10:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:18.177 ************************************ 00:12:18.177 START TEST raid_superblock_test 00:12:18.177 ************************************ 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:18.177 18:10:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66959 00:12:18.177 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:18.178 18:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66959 00:12:18.178 18:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66959 ']' 00:12:18.178 18:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.178 18:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.178 18:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.178 18:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.178 18:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.178 [2024-12-06 18:10:43.577605] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:12:18.178 [2024-12-06 18:10:43.577796] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66959 ] 00:12:18.437 [2024-12-06 18:10:43.749828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.437 [2024-12-06 18:10:43.882664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.695 [2024-12-06 18:10:44.087270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.695 [2024-12-06 18:10:44.087347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:19.265 
18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.265 malloc1 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.265 [2024-12-06 18:10:44.635474] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:19.265 [2024-12-06 18:10:44.635560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.265 [2024-12-06 18:10:44.635592] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:19.265 [2024-12-06 18:10:44.635607] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.265 [2024-12-06 18:10:44.638380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.265 [2024-12-06 18:10:44.638424] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:19.265 pt1 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.265 malloc2 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.265 [2024-12-06 18:10:44.688233] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:19.265 [2024-12-06 18:10:44.688455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.265 [2024-12-06 18:10:44.688538] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:19.265 [2024-12-06 18:10:44.688647] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.265 [2024-12-06 18:10:44.691454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.265 pt2 00:12:19.265 [2024-12-06 18:10:44.691639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.265 malloc3 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.265 [2024-12-06 18:10:44.755924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:19.265 [2024-12-06 18:10:44.756138] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.265 [2024-12-06 18:10:44.756218] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:19.265 [2024-12-06 18:10:44.756331] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.265 [2024-12-06 18:10:44.759341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.265 pt3 00:12:19.265 [2024-12-06 18:10:44.759506] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.265 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.265 [2024-12-06 18:10:44.764187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:19.265 [2024-12-06 18:10:44.766809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:19.265 [2024-12-06 18:10:44.767036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:19.265 [2024-12-06 18:10:44.767374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:19.265 [2024-12-06 18:10:44.767405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:19.266 [2024-12-06 18:10:44.767740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:12:19.266 [2024-12-06 18:10:44.767961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:19.266 [2024-12-06 18:10:44.767979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:19.266 [2024-12-06 18:10:44.768222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.266 18:10:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.554 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.554 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.554 "name": "raid_bdev1", 00:12:19.554 "uuid": "3a7dcaac-274e-4db4-9c6d-b231ab5cbc07", 00:12:19.554 "strip_size_kb": 64, 00:12:19.554 "state": "online", 00:12:19.554 "raid_level": "concat", 00:12:19.554 "superblock": true, 00:12:19.554 "num_base_bdevs": 3, 00:12:19.554 "num_base_bdevs_discovered": 3, 00:12:19.554 "num_base_bdevs_operational": 3, 00:12:19.554 "base_bdevs_list": [ 00:12:19.554 { 00:12:19.554 "name": "pt1", 00:12:19.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.554 "is_configured": true, 00:12:19.554 "data_offset": 2048, 00:12:19.554 "data_size": 63488 00:12:19.554 }, 00:12:19.554 { 00:12:19.554 "name": "pt2", 00:12:19.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.554 "is_configured": true, 00:12:19.554 "data_offset": 2048, 00:12:19.554 "data_size": 63488 00:12:19.554 }, 00:12:19.554 { 00:12:19.554 "name": "pt3", 00:12:19.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.554 "is_configured": true, 00:12:19.554 "data_offset": 2048, 00:12:19.554 "data_size": 63488 00:12:19.554 } 00:12:19.554 ] 00:12:19.554 }' 00:12:19.554 18:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.554 18:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.812 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:19.812 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:19.812 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:19.812 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:12:19.812 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:19.812 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:19.812 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.812 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:19.812 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.812 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.812 [2024-12-06 18:10:45.248673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.812 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.812 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:19.812 "name": "raid_bdev1", 00:12:19.812 "aliases": [ 00:12:19.812 "3a7dcaac-274e-4db4-9c6d-b231ab5cbc07" 00:12:19.812 ], 00:12:19.812 "product_name": "Raid Volume", 00:12:19.812 "block_size": 512, 00:12:19.812 "num_blocks": 190464, 00:12:19.812 "uuid": "3a7dcaac-274e-4db4-9c6d-b231ab5cbc07", 00:12:19.812 "assigned_rate_limits": { 00:12:19.812 "rw_ios_per_sec": 0, 00:12:19.812 "rw_mbytes_per_sec": 0, 00:12:19.812 "r_mbytes_per_sec": 0, 00:12:19.812 "w_mbytes_per_sec": 0 00:12:19.812 }, 00:12:19.812 "claimed": false, 00:12:19.812 "zoned": false, 00:12:19.812 "supported_io_types": { 00:12:19.812 "read": true, 00:12:19.812 "write": true, 00:12:19.812 "unmap": true, 00:12:19.812 "flush": true, 00:12:19.812 "reset": true, 00:12:19.812 "nvme_admin": false, 00:12:19.812 "nvme_io": false, 00:12:19.812 "nvme_io_md": false, 00:12:19.812 "write_zeroes": true, 00:12:19.812 "zcopy": false, 00:12:19.812 "get_zone_info": false, 00:12:19.812 "zone_management": false, 00:12:19.812 "zone_append": false, 00:12:19.812 "compare": 
false, 00:12:19.812 "compare_and_write": false, 00:12:19.812 "abort": false, 00:12:19.812 "seek_hole": false, 00:12:19.812 "seek_data": false, 00:12:19.812 "copy": false, 00:12:19.812 "nvme_iov_md": false 00:12:19.812 }, 00:12:19.812 "memory_domains": [ 00:12:19.812 { 00:12:19.812 "dma_device_id": "system", 00:12:19.812 "dma_device_type": 1 00:12:19.812 }, 00:12:19.812 { 00:12:19.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.812 "dma_device_type": 2 00:12:19.812 }, 00:12:19.812 { 00:12:19.812 "dma_device_id": "system", 00:12:19.812 "dma_device_type": 1 00:12:19.812 }, 00:12:19.812 { 00:12:19.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.812 "dma_device_type": 2 00:12:19.812 }, 00:12:19.812 { 00:12:19.812 "dma_device_id": "system", 00:12:19.812 "dma_device_type": 1 00:12:19.812 }, 00:12:19.812 { 00:12:19.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.812 "dma_device_type": 2 00:12:19.812 } 00:12:19.812 ], 00:12:19.812 "driver_specific": { 00:12:19.812 "raid": { 00:12:19.812 "uuid": "3a7dcaac-274e-4db4-9c6d-b231ab5cbc07", 00:12:19.812 "strip_size_kb": 64, 00:12:19.812 "state": "online", 00:12:19.812 "raid_level": "concat", 00:12:19.812 "superblock": true, 00:12:19.812 "num_base_bdevs": 3, 00:12:19.812 "num_base_bdevs_discovered": 3, 00:12:19.812 "num_base_bdevs_operational": 3, 00:12:19.812 "base_bdevs_list": [ 00:12:19.812 { 00:12:19.812 "name": "pt1", 00:12:19.812 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.812 "is_configured": true, 00:12:19.812 "data_offset": 2048, 00:12:19.812 "data_size": 63488 00:12:19.812 }, 00:12:19.812 { 00:12:19.812 "name": "pt2", 00:12:19.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.812 "is_configured": true, 00:12:19.812 "data_offset": 2048, 00:12:19.812 "data_size": 63488 00:12:19.812 }, 00:12:19.812 { 00:12:19.812 "name": "pt3", 00:12:19.812 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.812 "is_configured": true, 00:12:19.812 "data_offset": 2048, 00:12:19.812 
"data_size": 63488 00:12:19.812 } 00:12:19.812 ] 00:12:19.812 } 00:12:19.812 } 00:12:19.812 }' 00:12:19.812 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:20.070 pt2 00:12:20.070 pt3' 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.070 [2024-12-06 18:10:45.568736] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.070 18:10:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3a7dcaac-274e-4db4-9c6d-b231ab5cbc07 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3a7dcaac-274e-4db4-9c6d-b231ab5cbc07 ']' 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.329 [2024-12-06 18:10:45.624431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:20.329 [2024-12-06 18:10:45.624632] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.329 [2024-12-06 18:10:45.624885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.329 [2024-12-06 18:10:45.624982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.329 [2024-12-06 18:10:45.625001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 
00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:20.329 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.330 [2024-12-06 18:10:45.768509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:20.330 [2024-12-06 18:10:45.771220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:20.330 
[2024-12-06 18:10:45.771290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:20.330 [2024-12-06 18:10:45.771378] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:20.330 [2024-12-06 18:10:45.771469] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:20.330 [2024-12-06 18:10:45.771502] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:20.330 [2024-12-06 18:10:45.771529] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:20.330 [2024-12-06 18:10:45.771543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:20.330 request: 00:12:20.330 { 00:12:20.330 "name": "raid_bdev1", 00:12:20.330 "raid_level": "concat", 00:12:20.330 "base_bdevs": [ 00:12:20.330 "malloc1", 00:12:20.330 "malloc2", 00:12:20.330 "malloc3" 00:12:20.330 ], 00:12:20.330 "strip_size_kb": 64, 00:12:20.330 "superblock": false, 00:12:20.330 "method": "bdev_raid_create", 00:12:20.330 "req_id": 1 00:12:20.330 } 00:12:20.330 Got JSON-RPC error response 00:12:20.330 response: 00:12:20.330 { 00:12:20.330 "code": -17, 00:12:20.330 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:20.330 } 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:20.330 18:10:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.330 [2024-12-06 18:10:45.832461] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:20.330 [2024-12-06 18:10:45.832676] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.330 [2024-12-06 18:10:45.832754] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:20.330 [2024-12-06 18:10:45.832978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.330 [2024-12-06 18:10:45.835881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.330 [2024-12-06 18:10:45.836050] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:20.330 [2024-12-06 18:10:45.836263] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:20.330 [2024-12-06 18:10:45.836458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:12:20.330 pt1 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.330 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.588 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.589 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.589 "name": "raid_bdev1", 00:12:20.589 "uuid": 
"3a7dcaac-274e-4db4-9c6d-b231ab5cbc07", 00:12:20.589 "strip_size_kb": 64, 00:12:20.589 "state": "configuring", 00:12:20.589 "raid_level": "concat", 00:12:20.589 "superblock": true, 00:12:20.589 "num_base_bdevs": 3, 00:12:20.589 "num_base_bdevs_discovered": 1, 00:12:20.589 "num_base_bdevs_operational": 3, 00:12:20.589 "base_bdevs_list": [ 00:12:20.589 { 00:12:20.589 "name": "pt1", 00:12:20.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:20.589 "is_configured": true, 00:12:20.589 "data_offset": 2048, 00:12:20.589 "data_size": 63488 00:12:20.589 }, 00:12:20.589 { 00:12:20.589 "name": null, 00:12:20.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.589 "is_configured": false, 00:12:20.589 "data_offset": 2048, 00:12:20.589 "data_size": 63488 00:12:20.589 }, 00:12:20.589 { 00:12:20.589 "name": null, 00:12:20.589 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:20.589 "is_configured": false, 00:12:20.589 "data_offset": 2048, 00:12:20.589 "data_size": 63488 00:12:20.589 } 00:12:20.589 ] 00:12:20.589 }' 00:12:20.589 18:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.589 18:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.847 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:20.847 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:20.847 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.847 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.847 [2024-12-06 18:10:46.352987] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:20.847 [2024-12-06 18:10:46.353208] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.847 [2024-12-06 18:10:46.353293] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:20.847 [2024-12-06 18:10:46.353315] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.847 [2024-12-06 18:10:46.353876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.847 [2024-12-06 18:10:46.353912] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:20.847 [2024-12-06 18:10:46.354021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:20.847 [2024-12-06 18:10:46.354067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:20.847 pt2 00:12:20.847 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.847 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:20.847 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.847 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.847 [2024-12-06 18:10:46.360957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:20.847 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.847 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:20.847 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.106 "name": "raid_bdev1", 00:12:21.106 "uuid": "3a7dcaac-274e-4db4-9c6d-b231ab5cbc07", 00:12:21.106 "strip_size_kb": 64, 00:12:21.106 "state": "configuring", 00:12:21.106 "raid_level": "concat", 00:12:21.106 "superblock": true, 00:12:21.106 "num_base_bdevs": 3, 00:12:21.106 "num_base_bdevs_discovered": 1, 00:12:21.106 "num_base_bdevs_operational": 3, 00:12:21.106 "base_bdevs_list": [ 00:12:21.106 { 00:12:21.106 "name": "pt1", 00:12:21.106 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:21.106 "is_configured": true, 00:12:21.106 "data_offset": 2048, 00:12:21.106 "data_size": 63488 00:12:21.106 }, 00:12:21.106 { 00:12:21.106 "name": null, 00:12:21.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.106 "is_configured": false, 00:12:21.106 "data_offset": 0, 00:12:21.106 "data_size": 63488 00:12:21.106 }, 00:12:21.106 { 00:12:21.106 "name": null, 00:12:21.106 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:12:21.106 "is_configured": false, 00:12:21.106 "data_offset": 2048, 00:12:21.106 "data_size": 63488 00:12:21.106 } 00:12:21.106 ] 00:12:21.106 }' 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.106 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.736 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:21.736 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:21.736 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:21.736 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.736 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.736 [2024-12-06 18:10:46.941532] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:21.736 [2024-12-06 18:10:46.941782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.736 [2024-12-06 18:10:46.941821] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:21.736 [2024-12-06 18:10:46.941849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.736 [2024-12-06 18:10:46.942454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.736 [2024-12-06 18:10:46.942491] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:21.736 [2024-12-06 18:10:46.942608] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:21.737 [2024-12-06 18:10:46.942652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:21.737 pt2 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.737 [2024-12-06 18:10:46.949501] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:21.737 [2024-12-06 18:10:46.949572] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.737 [2024-12-06 18:10:46.949609] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:21.737 [2024-12-06 18:10:46.949625] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.737 [2024-12-06 18:10:46.950073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.737 [2024-12-06 18:10:46.950123] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:21.737 [2024-12-06 18:10:46.950198] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:21.737 [2024-12-06 18:10:46.950230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:21.737 [2024-12-06 18:10:46.950373] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:21.737 [2024-12-06 18:10:46.950407] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:21.737 [2024-12-06 18:10:46.950726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:21.737 [2024-12-06 
18:10:46.950931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:21.737 [2024-12-06 18:10:46.950953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:21.737 [2024-12-06 18:10:46.951117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.737 pt3 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.737 18:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.737 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.737 "name": "raid_bdev1", 00:12:21.737 "uuid": "3a7dcaac-274e-4db4-9c6d-b231ab5cbc07", 00:12:21.737 "strip_size_kb": 64, 00:12:21.737 "state": "online", 00:12:21.737 "raid_level": "concat", 00:12:21.737 "superblock": true, 00:12:21.737 "num_base_bdevs": 3, 00:12:21.737 "num_base_bdevs_discovered": 3, 00:12:21.737 "num_base_bdevs_operational": 3, 00:12:21.737 "base_bdevs_list": [ 00:12:21.737 { 00:12:21.737 "name": "pt1", 00:12:21.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:21.737 "is_configured": true, 00:12:21.737 "data_offset": 2048, 00:12:21.737 "data_size": 63488 00:12:21.737 }, 00:12:21.737 { 00:12:21.737 "name": "pt2", 00:12:21.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.737 "is_configured": true, 00:12:21.737 "data_offset": 2048, 00:12:21.737 "data_size": 63488 00:12:21.737 }, 00:12:21.737 { 00:12:21.737 "name": "pt3", 00:12:21.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:21.737 "is_configured": true, 00:12:21.737 "data_offset": 2048, 00:12:21.737 "data_size": 63488 00:12:21.737 } 00:12:21.737 ] 00:12:21.737 }' 00:12:21.737 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.737 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.995 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:21.995 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:21.995 18:10:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:21.995 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:21.995 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:21.995 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:21.995 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:21.995 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.995 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.995 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:21.995 [2024-12-06 18:10:47.462143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.995 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.254 "name": "raid_bdev1", 00:12:22.254 "aliases": [ 00:12:22.254 "3a7dcaac-274e-4db4-9c6d-b231ab5cbc07" 00:12:22.254 ], 00:12:22.254 "product_name": "Raid Volume", 00:12:22.254 "block_size": 512, 00:12:22.254 "num_blocks": 190464, 00:12:22.254 "uuid": "3a7dcaac-274e-4db4-9c6d-b231ab5cbc07", 00:12:22.254 "assigned_rate_limits": { 00:12:22.254 "rw_ios_per_sec": 0, 00:12:22.254 "rw_mbytes_per_sec": 0, 00:12:22.254 "r_mbytes_per_sec": 0, 00:12:22.254 "w_mbytes_per_sec": 0 00:12:22.254 }, 00:12:22.254 "claimed": false, 00:12:22.254 "zoned": false, 00:12:22.254 "supported_io_types": { 00:12:22.254 "read": true, 00:12:22.254 "write": true, 00:12:22.254 "unmap": true, 00:12:22.254 "flush": true, 00:12:22.254 "reset": true, 00:12:22.254 "nvme_admin": false, 00:12:22.254 "nvme_io": false, 00:12:22.254 "nvme_io_md": false, 00:12:22.254 
"write_zeroes": true, 00:12:22.254 "zcopy": false, 00:12:22.254 "get_zone_info": false, 00:12:22.254 "zone_management": false, 00:12:22.254 "zone_append": false, 00:12:22.254 "compare": false, 00:12:22.254 "compare_and_write": false, 00:12:22.254 "abort": false, 00:12:22.254 "seek_hole": false, 00:12:22.254 "seek_data": false, 00:12:22.254 "copy": false, 00:12:22.254 "nvme_iov_md": false 00:12:22.254 }, 00:12:22.254 "memory_domains": [ 00:12:22.254 { 00:12:22.254 "dma_device_id": "system", 00:12:22.254 "dma_device_type": 1 00:12:22.254 }, 00:12:22.254 { 00:12:22.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.254 "dma_device_type": 2 00:12:22.254 }, 00:12:22.254 { 00:12:22.254 "dma_device_id": "system", 00:12:22.254 "dma_device_type": 1 00:12:22.254 }, 00:12:22.254 { 00:12:22.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.254 "dma_device_type": 2 00:12:22.254 }, 00:12:22.254 { 00:12:22.254 "dma_device_id": "system", 00:12:22.254 "dma_device_type": 1 00:12:22.254 }, 00:12:22.254 { 00:12:22.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.254 "dma_device_type": 2 00:12:22.254 } 00:12:22.254 ], 00:12:22.254 "driver_specific": { 00:12:22.254 "raid": { 00:12:22.254 "uuid": "3a7dcaac-274e-4db4-9c6d-b231ab5cbc07", 00:12:22.254 "strip_size_kb": 64, 00:12:22.254 "state": "online", 00:12:22.254 "raid_level": "concat", 00:12:22.254 "superblock": true, 00:12:22.254 "num_base_bdevs": 3, 00:12:22.254 "num_base_bdevs_discovered": 3, 00:12:22.254 "num_base_bdevs_operational": 3, 00:12:22.254 "base_bdevs_list": [ 00:12:22.254 { 00:12:22.254 "name": "pt1", 00:12:22.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.254 "is_configured": true, 00:12:22.254 "data_offset": 2048, 00:12:22.254 "data_size": 63488 00:12:22.254 }, 00:12:22.254 { 00:12:22.254 "name": "pt2", 00:12:22.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.254 "is_configured": true, 00:12:22.254 "data_offset": 2048, 00:12:22.254 "data_size": 63488 00:12:22.254 }, 00:12:22.254 
{ 00:12:22.254 "name": "pt3", 00:12:22.254 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.254 "is_configured": true, 00:12:22.254 "data_offset": 2048, 00:12:22.254 "data_size": 63488 00:12:22.254 } 00:12:22.254 ] 00:12:22.254 } 00:12:22.254 } 00:12:22.254 }' 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:22.254 pt2 00:12:22.254 pt3' 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:22.254 18:10:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.254 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.513 
[2024-12-06 18:10:47.790094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3a7dcaac-274e-4db4-9c6d-b231ab5cbc07 '!=' 3a7dcaac-274e-4db4-9c6d-b231ab5cbc07 ']' 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66959 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66959 ']' 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66959 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66959 00:12:22.513 killing process with pid 66959 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66959' 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66959 00:12:22.513 [2024-12-06 18:10:47.868005] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:22.513 18:10:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 66959 00:12:22.513 [2024-12-06 18:10:47.868105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.513 [2024-12-06 18:10:47.868185] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.513 [2024-12-06 18:10:47.868205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:22.771 [2024-12-06 18:10:48.139448] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:23.705 ************************************ 00:12:23.705 END TEST raid_superblock_test 00:12:23.705 ************************************ 00:12:23.705 18:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:23.705 00:12:23.705 real 0m5.718s 00:12:23.705 user 0m8.653s 00:12:23.705 sys 0m0.801s 00:12:23.705 18:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.705 18:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.963 18:10:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:12:23.963 18:10:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:23.963 18:10:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.963 18:10:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:23.963 ************************************ 00:12:23.963 START TEST raid_read_error_test 00:12:23.963 ************************************ 00:12:23.963 18:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:12:23.963 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:23.963 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:23.963 18:10:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:23.963 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:23.963 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:23.963 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:23.963 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hEwkwxc9fr 00:12:23.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67212 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67212 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67212 ']' 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.964 18:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.964 [2024-12-06 18:10:49.360750] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:12:23.964 [2024-12-06 18:10:49.361344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67212 ] 00:12:24.237 [2024-12-06 18:10:49.542882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.237 [2024-12-06 18:10:49.700447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.495 [2024-12-06 18:10:49.908369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.495 [2024-12-06 18:10:49.908629] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.062 BaseBdev1_malloc 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.062 true 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.062 [2024-12-06 18:10:50.410384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:25.062 [2024-12-06 18:10:50.410470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.062 [2024-12-06 18:10:50.410500] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:25.062 [2024-12-06 18:10:50.410518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.062 [2024-12-06 18:10:50.413262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.062 [2024-12-06 18:10:50.413314] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:25.062 BaseBdev1 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.062 BaseBdev2_malloc 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.062 true 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.062 [2024-12-06 18:10:50.467475] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:25.062 [2024-12-06 18:10:50.467578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.062 [2024-12-06 18:10:50.467615] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:25.062 [2024-12-06 18:10:50.467638] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.062 BaseBdev2 00:12:25.062 [2024-12-06 18:10:50.471403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.062 [2024-12-06 18:10:50.471472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.062 BaseBdev3_malloc 00:12:25.062 18:10:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.062 true 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.062 [2024-12-06 18:10:50.530438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:25.062 [2024-12-06 18:10:50.530626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.062 [2024-12-06 18:10:50.530664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:25.062 [2024-12-06 18:10:50.530696] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.062 [2024-12-06 18:10:50.533568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.062 [2024-12-06 18:10:50.533729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:25.062 BaseBdev3 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.062 [2024-12-06 18:10:50.538630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.062 [2024-12-06 18:10:50.541459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.062 [2024-12-06 18:10:50.541724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:25.062 [2024-12-06 18:10:50.542036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:25.062 [2024-12-06 18:10:50.542057] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:25.062 [2024-12-06 18:10:50.542390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:25.062 [2024-12-06 18:10:50.542595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:25.062 [2024-12-06 18:10:50.542618] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:25.062 [2024-12-06 18:10:50.542903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.062 18:10:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.062 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.321 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.321 "name": "raid_bdev1", 00:12:25.321 "uuid": "e7193959-2f78-486a-b43e-1ab05f489029", 00:12:25.321 "strip_size_kb": 64, 00:12:25.321 "state": "online", 00:12:25.321 "raid_level": "concat", 00:12:25.321 "superblock": true, 00:12:25.321 "num_base_bdevs": 3, 00:12:25.321 "num_base_bdevs_discovered": 3, 00:12:25.321 "num_base_bdevs_operational": 3, 00:12:25.321 "base_bdevs_list": [ 00:12:25.321 { 00:12:25.321 "name": "BaseBdev1", 00:12:25.321 "uuid": "a9f91c31-e003-5386-a88d-b599a7b8ac90", 00:12:25.321 "is_configured": true, 00:12:25.321 "data_offset": 2048, 00:12:25.321 "data_size": 63488 00:12:25.321 }, 00:12:25.321 { 00:12:25.321 "name": "BaseBdev2", 00:12:25.321 "uuid": "89c9e680-a810-591b-8341-48424a306aeb", 00:12:25.321 "is_configured": true, 00:12:25.321 "data_offset": 2048, 00:12:25.321 "data_size": 63488 
00:12:25.321 }, 00:12:25.321 { 00:12:25.321 "name": "BaseBdev3", 00:12:25.321 "uuid": "e9d2e7cc-3d8a-50ff-922a-5f7ff5ee9662", 00:12:25.321 "is_configured": true, 00:12:25.321 "data_offset": 2048, 00:12:25.321 "data_size": 63488 00:12:25.321 } 00:12:25.321 ] 00:12:25.321 }' 00:12:25.321 18:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.321 18:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.579 18:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:25.579 18:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:25.838 [2024-12-06 18:10:51.176495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.773 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.773 "name": "raid_bdev1", 00:12:26.773 "uuid": "e7193959-2f78-486a-b43e-1ab05f489029", 00:12:26.773 "strip_size_kb": 64, 00:12:26.773 "state": "online", 00:12:26.773 "raid_level": "concat", 00:12:26.773 "superblock": true, 00:12:26.773 "num_base_bdevs": 3, 00:12:26.773 "num_base_bdevs_discovered": 3, 00:12:26.773 "num_base_bdevs_operational": 3, 00:12:26.773 "base_bdevs_list": [ 00:12:26.773 { 00:12:26.774 "name": "BaseBdev1", 00:12:26.774 "uuid": "a9f91c31-e003-5386-a88d-b599a7b8ac90", 00:12:26.774 "is_configured": true, 00:12:26.774 "data_offset": 2048, 00:12:26.774 "data_size": 63488 
00:12:26.774 }, 00:12:26.774 { 00:12:26.774 "name": "BaseBdev2", 00:12:26.774 "uuid": "89c9e680-a810-591b-8341-48424a306aeb", 00:12:26.774 "is_configured": true, 00:12:26.774 "data_offset": 2048, 00:12:26.774 "data_size": 63488 00:12:26.774 }, 00:12:26.774 { 00:12:26.774 "name": "BaseBdev3", 00:12:26.774 "uuid": "e9d2e7cc-3d8a-50ff-922a-5f7ff5ee9662", 00:12:26.774 "is_configured": true, 00:12:26.774 "data_offset": 2048, 00:12:26.774 "data_size": 63488 00:12:26.774 } 00:12:26.774 ] 00:12:26.774 }' 00:12:26.774 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.774 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.340 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:27.341 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.341 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 [2024-12-06 18:10:52.582001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.341 [2024-12-06 18:10:52.582167] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.341 [2024-12-06 18:10:52.585761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.341 [2024-12-06 18:10:52.585976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.341 [2024-12-06 18:10:52.586192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.341 [2024-12-06 18:10:52.586220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:27.341 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.341 { 00:12:27.341 "results": [ 00:12:27.341 { 00:12:27.341 "job": "raid_bdev1", 
00:12:27.341 "core_mask": "0x1", 00:12:27.341 "workload": "randrw", 00:12:27.341 "percentage": 50, 00:12:27.341 "status": "finished", 00:12:27.341 "queue_depth": 1, 00:12:27.341 "io_size": 131072, 00:12:27.341 "runtime": 1.403456, 00:12:27.341 "iops": 10221.196817000318, 00:12:27.341 "mibps": 1277.6496021250398, 00:12:27.341 "io_failed": 1, 00:12:27.341 "io_timeout": 0, 00:12:27.341 "avg_latency_us": 135.62830716195836, 00:12:27.341 "min_latency_us": 42.82181818181818, 00:12:27.341 "max_latency_us": 1832.0290909090909 00:12:27.341 } 00:12:27.341 ], 00:12:27.341 "core_count": 1 00:12:27.341 } 00:12:27.341 18:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67212 00:12:27.341 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67212 ']' 00:12:27.341 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67212 00:12:27.341 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:27.341 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.341 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67212 00:12:27.341 killing process with pid 67212 00:12:27.341 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.341 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.341 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67212' 00:12:27.341 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67212 00:12:27.341 [2024-12-06 18:10:52.617229] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.341 18:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67212 00:12:27.341 [2024-12-06 
18:10:52.821174] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:28.718 18:10:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hEwkwxc9fr 00:12:28.718 18:10:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:28.718 18:10:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:28.718 18:10:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:28.718 18:10:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:28.718 18:10:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:28.718 18:10:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:28.718 18:10:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:28.718 00:12:28.718 real 0m4.682s 00:12:28.718 user 0m5.795s 00:12:28.718 sys 0m0.575s 00:12:28.718 18:10:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.718 ************************************ 00:12:28.718 END TEST raid_read_error_test 00:12:28.718 ************************************ 00:12:28.718 18:10:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.718 18:10:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:12:28.718 18:10:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:28.718 18:10:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.718 18:10:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:28.718 ************************************ 00:12:28.718 START TEST raid_write_error_test 00:12:28.718 ************************************ 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:12:28.718 18:10:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:28.718 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:28.719 18:10:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nYkEeBuI4K 00:12:28.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67358 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67358 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67358 ']' 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.719 18:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.719 [2024-12-06 18:10:54.090859] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:12:28.719 [2024-12-06 18:10:54.091013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67358 ] 00:12:29.011 [2024-12-06 18:10:54.267437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.011 [2024-12-06 18:10:54.397558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.270 [2024-12-06 18:10:54.600304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.270 [2024-12-06 18:10:54.600383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.837 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.837 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:29.837 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.837 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:29.837 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.837 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.838 BaseBdev1_malloc 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.838 true 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.838 [2024-12-06 18:10:55.169054] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:29.838 [2024-12-06 18:10:55.169124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.838 [2024-12-06 18:10:55.169153] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:29.838 [2024-12-06 18:10:55.169171] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.838 [2024-12-06 18:10:55.171978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.838 [2024-12-06 18:10:55.172032] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:29.838 BaseBdev1 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.838 BaseBdev2_malloc 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.838 true 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.838 [2024-12-06 18:10:55.224254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:29.838 [2024-12-06 18:10:55.224324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.838 [2024-12-06 18:10:55.224349] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:29.838 [2024-12-06 18:10:55.224367] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.838 [2024-12-06 18:10:55.227144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.838 [2024-12-06 18:10:55.227325] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:29.838 BaseBdev2 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.838 18:10:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.838 BaseBdev3_malloc 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.838 true 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.838 [2024-12-06 18:10:55.301611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:29.838 [2024-12-06 18:10:55.301680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.838 [2024-12-06 18:10:55.301707] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:29.838 [2024-12-06 18:10:55.301725] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.838 [2024-12-06 18:10:55.304479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.838 BaseBdev3 00:12:29.838 [2024-12-06 18:10:55.304666] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev3 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.838 [2024-12-06 18:10:55.309712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.838 [2024-12-06 18:10:55.312301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:29.838 [2024-12-06 18:10:55.312559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:29.838 [2024-12-06 18:10:55.312905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:29.838 [2024-12-06 18:10:55.313038] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:29.838 [2024-12-06 18:10:55.313398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:29.838 [2024-12-06 18:10:55.313737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:29.838 [2024-12-06 18:10:55.313895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:29.838 [2024-12-06 18:10:55.314256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.838 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.096 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.096 "name": "raid_bdev1", 00:12:30.096 "uuid": "fa556a05-4677-40ea-a4b4-a91242ee28be", 00:12:30.096 "strip_size_kb": 64, 00:12:30.096 "state": "online", 00:12:30.096 "raid_level": "concat", 00:12:30.096 "superblock": true, 00:12:30.096 "num_base_bdevs": 3, 00:12:30.096 "num_base_bdevs_discovered": 3, 00:12:30.096 "num_base_bdevs_operational": 3, 00:12:30.096 "base_bdevs_list": [ 00:12:30.096 { 00:12:30.096 "name": "BaseBdev1", 
00:12:30.096 "uuid": "50467ca8-6188-52b7-93a6-09da6efe4e5c", 00:12:30.096 "is_configured": true, 00:12:30.096 "data_offset": 2048, 00:12:30.096 "data_size": 63488 00:12:30.096 }, 00:12:30.096 { 00:12:30.096 "name": "BaseBdev2", 00:12:30.096 "uuid": "d93a6836-482f-5a17-9d34-b56fb180bef4", 00:12:30.096 "is_configured": true, 00:12:30.096 "data_offset": 2048, 00:12:30.096 "data_size": 63488 00:12:30.096 }, 00:12:30.096 { 00:12:30.096 "name": "BaseBdev3", 00:12:30.096 "uuid": "35872a04-f173-5582-a3f8-1fd875aa96f6", 00:12:30.096 "is_configured": true, 00:12:30.096 "data_offset": 2048, 00:12:30.096 "data_size": 63488 00:12:30.096 } 00:12:30.096 ] 00:12:30.096 }' 00:12:30.097 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.097 18:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.354 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:30.354 18:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:30.613 [2024-12-06 18:10:55.975807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.548 "name": "raid_bdev1", 00:12:31.548 "uuid": "fa556a05-4677-40ea-a4b4-a91242ee28be", 00:12:31.548 "strip_size_kb": 64, 00:12:31.548 "state": "online", 00:12:31.548 
"raid_level": "concat", 00:12:31.548 "superblock": true, 00:12:31.548 "num_base_bdevs": 3, 00:12:31.548 "num_base_bdevs_discovered": 3, 00:12:31.548 "num_base_bdevs_operational": 3, 00:12:31.548 "base_bdevs_list": [ 00:12:31.548 { 00:12:31.548 "name": "BaseBdev1", 00:12:31.548 "uuid": "50467ca8-6188-52b7-93a6-09da6efe4e5c", 00:12:31.548 "is_configured": true, 00:12:31.548 "data_offset": 2048, 00:12:31.548 "data_size": 63488 00:12:31.548 }, 00:12:31.548 { 00:12:31.548 "name": "BaseBdev2", 00:12:31.548 "uuid": "d93a6836-482f-5a17-9d34-b56fb180bef4", 00:12:31.548 "is_configured": true, 00:12:31.548 "data_offset": 2048, 00:12:31.548 "data_size": 63488 00:12:31.548 }, 00:12:31.548 { 00:12:31.548 "name": "BaseBdev3", 00:12:31.548 "uuid": "35872a04-f173-5582-a3f8-1fd875aa96f6", 00:12:31.548 "is_configured": true, 00:12:31.548 "data_offset": 2048, 00:12:31.548 "data_size": 63488 00:12:31.548 } 00:12:31.548 ] 00:12:31.548 }' 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.548 18:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.114 18:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:32.114 18:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.114 18:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.114 [2024-12-06 18:10:57.406357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.114 [2024-12-06 18:10:57.407087] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.114 { 00:12:32.114 "results": [ 00:12:32.114 { 00:12:32.114 "job": "raid_bdev1", 00:12:32.114 "core_mask": "0x1", 00:12:32.114 "workload": "randrw", 00:12:32.114 "percentage": 50, 00:12:32.114 "status": "finished", 00:12:32.114 "queue_depth": 1, 00:12:32.114 "io_size": 131072, 
00:12:32.114 "runtime": 1.428846, 00:12:32.114 "iops": 10141.750755504792, 00:12:32.114 "mibps": 1267.718844438099, 00:12:32.114 "io_failed": 1, 00:12:32.114 "io_timeout": 0, 00:12:32.114 "avg_latency_us": 136.8043319198053, 00:12:32.114 "min_latency_us": 42.82181818181818, 00:12:32.114 "max_latency_us": 1832.0290909090909 00:12:32.114 } 00:12:32.114 ], 00:12:32.114 "core_count": 1 00:12:32.114 } 00:12:32.114 [2024-12-06 18:10:57.411531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.114 [2024-12-06 18:10:57.411681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.114 [2024-12-06 18:10:57.411759] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.114 [2024-12-06 18:10:57.411802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:32.114 18:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.114 18:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67358 00:12:32.114 18:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67358 ']' 00:12:32.114 18:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67358 00:12:32.114 18:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:32.114 18:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.114 18:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67358 00:12:32.114 killing process with pid 67358 00:12:32.114 18:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:32.114 18:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:32.114 18:10:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67358' 00:12:32.114 18:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67358 00:12:32.114 [2024-12-06 18:10:57.446131] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.114 18:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67358 00:12:32.373 [2024-12-06 18:10:57.699781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:33.747 18:10:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nYkEeBuI4K 00:12:33.747 18:10:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:33.747 18:10:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:33.747 18:10:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:33.747 18:10:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:33.747 ************************************ 00:12:33.747 END TEST raid_write_error_test 00:12:33.747 ************************************ 00:12:33.747 18:10:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:33.747 18:10:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:33.747 18:10:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:12:33.747 00:12:33.747 real 0m4.923s 00:12:33.747 user 0m6.095s 00:12:33.747 sys 0m0.587s 00:12:33.747 18:10:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.747 18:10:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.747 18:10:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:33.747 18:10:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:12:33.747 18:10:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:33.747 18:10:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.747 18:10:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:33.747 ************************************ 00:12:33.747 START TEST raid_state_function_test 00:12:33.747 ************************************ 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67508 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67508' 00:12:33.748 Process raid pid: 67508 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67508 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67508 ']' 00:12:33.748 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.748 18:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.748 [2024-12-06 18:10:59.059177] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:12:33.748 [2024-12-06 18:10:59.059339] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.748 [2024-12-06 18:10:59.241950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.006 [2024-12-06 18:10:59.407938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.265 [2024-12-06 18:10:59.612853] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.265 [2024-12-06 18:10:59.612891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.524 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.524 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:34.524 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:34.524 18:11:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.525 [2024-12-06 18:11:00.028189] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:34.525 [2024-12-06 18:11:00.028400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:34.525 [2024-12-06 18:11:00.028430] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:34.525 [2024-12-06 18:11:00.028449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:34.525 [2024-12-06 18:11:00.028459] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:34.525 [2024-12-06 18:11:00.028482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.525 
18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.525 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.783 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.783 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.783 "name": "Existed_Raid", 00:12:34.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.783 "strip_size_kb": 0, 00:12:34.783 "state": "configuring", 00:12:34.783 "raid_level": "raid1", 00:12:34.783 "superblock": false, 00:12:34.783 "num_base_bdevs": 3, 00:12:34.783 "num_base_bdevs_discovered": 0, 00:12:34.783 "num_base_bdevs_operational": 3, 00:12:34.783 "base_bdevs_list": [ 00:12:34.783 { 00:12:34.783 "name": "BaseBdev1", 00:12:34.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.783 "is_configured": false, 00:12:34.783 "data_offset": 0, 00:12:34.783 "data_size": 0 00:12:34.783 }, 00:12:34.783 { 00:12:34.783 "name": "BaseBdev2", 00:12:34.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.783 "is_configured": false, 00:12:34.783 "data_offset": 0, 00:12:34.783 "data_size": 0 00:12:34.783 }, 00:12:34.783 { 00:12:34.783 "name": "BaseBdev3", 00:12:34.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.784 "is_configured": false, 00:12:34.784 "data_offset": 0, 00:12:34.784 "data_size": 0 00:12:34.784 } 00:12:34.784 ] 00:12:34.784 }' 00:12:34.784 18:11:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.784 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.042 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:35.042 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.042 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.042 [2024-12-06 18:11:00.544336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:35.042 [2024-12-06 18:11:00.544378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:35.042 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.042 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:35.042 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.042 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.042 [2024-12-06 18:11:00.552268] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:35.042 [2024-12-06 18:11:00.552503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:35.042 [2024-12-06 18:11:00.552633] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:35.042 [2024-12-06 18:11:00.552785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:35.042 [2024-12-06 18:11:00.552901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:35.042 [2024-12-06 18:11:00.552961] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:35.042 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.042 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:35.042 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.042 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.354 BaseBdev1 00:12:35.354 [2024-12-06 18:11:00.598066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.354 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.354 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:35.354 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:35.354 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.354 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:35.354 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.354 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.354 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.354 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.354 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.354 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.354 18:11:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:35.354 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.354 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.354 [ 00:12:35.354 { 00:12:35.354 "name": "BaseBdev1", 00:12:35.354 "aliases": [ 00:12:35.354 "092a6656-2849-4d1f-95d0-697dd685bfba" 00:12:35.354 ], 00:12:35.354 "product_name": "Malloc disk", 00:12:35.354 "block_size": 512, 00:12:35.354 "num_blocks": 65536, 00:12:35.354 "uuid": "092a6656-2849-4d1f-95d0-697dd685bfba", 00:12:35.354 "assigned_rate_limits": { 00:12:35.354 "rw_ios_per_sec": 0, 00:12:35.354 "rw_mbytes_per_sec": 0, 00:12:35.354 "r_mbytes_per_sec": 0, 00:12:35.354 "w_mbytes_per_sec": 0 00:12:35.354 }, 00:12:35.354 "claimed": true, 00:12:35.354 "claim_type": "exclusive_write", 00:12:35.354 "zoned": false, 00:12:35.354 "supported_io_types": { 00:12:35.354 "read": true, 00:12:35.354 "write": true, 00:12:35.354 "unmap": true, 00:12:35.355 "flush": true, 00:12:35.355 "reset": true, 00:12:35.355 "nvme_admin": false, 00:12:35.355 "nvme_io": false, 00:12:35.355 "nvme_io_md": false, 00:12:35.355 "write_zeroes": true, 00:12:35.355 "zcopy": true, 00:12:35.355 "get_zone_info": false, 00:12:35.355 "zone_management": false, 00:12:35.355 "zone_append": false, 00:12:35.355 "compare": false, 00:12:35.355 "compare_and_write": false, 00:12:35.355 "abort": true, 00:12:35.355 "seek_hole": false, 00:12:35.355 "seek_data": false, 00:12:35.355 "copy": true, 00:12:35.355 "nvme_iov_md": false 00:12:35.355 }, 00:12:35.355 "memory_domains": [ 00:12:35.355 { 00:12:35.355 "dma_device_id": "system", 00:12:35.355 "dma_device_type": 1 00:12:35.355 }, 00:12:35.355 { 00:12:35.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.355 "dma_device_type": 2 00:12:35.355 } 00:12:35.355 ], 00:12:35.355 "driver_specific": {} 00:12:35.355 } 00:12:35.355 ] 00:12:35.355 18:11:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:35.355 "name": "Existed_Raid", 00:12:35.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.355 "strip_size_kb": 0, 00:12:35.355 "state": "configuring", 00:12:35.355 "raid_level": "raid1", 00:12:35.355 "superblock": false, 00:12:35.355 "num_base_bdevs": 3, 00:12:35.355 "num_base_bdevs_discovered": 1, 00:12:35.355 "num_base_bdevs_operational": 3, 00:12:35.355 "base_bdevs_list": [ 00:12:35.355 { 00:12:35.355 "name": "BaseBdev1", 00:12:35.355 "uuid": "092a6656-2849-4d1f-95d0-697dd685bfba", 00:12:35.355 "is_configured": true, 00:12:35.355 "data_offset": 0, 00:12:35.355 "data_size": 65536 00:12:35.355 }, 00:12:35.355 { 00:12:35.355 "name": "BaseBdev2", 00:12:35.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.355 "is_configured": false, 00:12:35.355 "data_offset": 0, 00:12:35.355 "data_size": 0 00:12:35.355 }, 00:12:35.355 { 00:12:35.355 "name": "BaseBdev3", 00:12:35.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.355 "is_configured": false, 00:12:35.355 "data_offset": 0, 00:12:35.355 "data_size": 0 00:12:35.355 } 00:12:35.355 ] 00:12:35.355 }' 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.355 18:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.922 [2024-12-06 18:11:01.138250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:35.922 [2024-12-06 18:11:01.138313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.922 [2024-12-06 18:11:01.146285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.922 [2024-12-06 18:11:01.148960] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:35.922 [2024-12-06 18:11:01.149136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:35.922 [2024-12-06 18:11:01.149257] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:35.922 [2024-12-06 18:11:01.149318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.922 "name": "Existed_Raid", 00:12:35.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.922 "strip_size_kb": 0, 00:12:35.922 "state": "configuring", 00:12:35.922 "raid_level": "raid1", 00:12:35.922 "superblock": false, 00:12:35.922 "num_base_bdevs": 3, 00:12:35.922 "num_base_bdevs_discovered": 1, 00:12:35.922 "num_base_bdevs_operational": 3, 00:12:35.922 "base_bdevs_list": [ 00:12:35.922 { 00:12:35.922 "name": "BaseBdev1", 00:12:35.922 "uuid": "092a6656-2849-4d1f-95d0-697dd685bfba", 00:12:35.922 "is_configured": true, 00:12:35.922 "data_offset": 0, 00:12:35.922 "data_size": 65536 00:12:35.922 }, 00:12:35.922 { 00:12:35.922 "name": "BaseBdev2", 00:12:35.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.922 
"is_configured": false, 00:12:35.922 "data_offset": 0, 00:12:35.922 "data_size": 0 00:12:35.922 }, 00:12:35.922 { 00:12:35.922 "name": "BaseBdev3", 00:12:35.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.922 "is_configured": false, 00:12:35.922 "data_offset": 0, 00:12:35.922 "data_size": 0 00:12:35.922 } 00:12:35.922 ] 00:12:35.922 }' 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.922 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.182 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:36.182 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.182 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.182 [2024-12-06 18:11:01.692937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.182 BaseBdev2 00:12:36.182 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.182 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:36.182 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:36.182 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:36.182 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:36.182 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:36.182 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:36.182 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:36.182 18:11:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.182 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.441 [ 00:12:36.441 { 00:12:36.441 "name": "BaseBdev2", 00:12:36.441 "aliases": [ 00:12:36.441 "c570abf3-b821-403c-8599-87035615cca4" 00:12:36.441 ], 00:12:36.441 "product_name": "Malloc disk", 00:12:36.441 "block_size": 512, 00:12:36.441 "num_blocks": 65536, 00:12:36.441 "uuid": "c570abf3-b821-403c-8599-87035615cca4", 00:12:36.441 "assigned_rate_limits": { 00:12:36.441 "rw_ios_per_sec": 0, 00:12:36.441 "rw_mbytes_per_sec": 0, 00:12:36.441 "r_mbytes_per_sec": 0, 00:12:36.441 "w_mbytes_per_sec": 0 00:12:36.441 }, 00:12:36.441 "claimed": true, 00:12:36.441 "claim_type": "exclusive_write", 00:12:36.441 "zoned": false, 00:12:36.441 "supported_io_types": { 00:12:36.441 "read": true, 00:12:36.441 "write": true, 00:12:36.441 "unmap": true, 00:12:36.441 "flush": true, 00:12:36.441 "reset": true, 00:12:36.441 "nvme_admin": false, 00:12:36.441 "nvme_io": false, 00:12:36.441 "nvme_io_md": false, 00:12:36.441 "write_zeroes": true, 00:12:36.441 "zcopy": true, 00:12:36.441 "get_zone_info": false, 00:12:36.441 "zone_management": false, 00:12:36.441 "zone_append": false, 00:12:36.441 "compare": false, 00:12:36.441 "compare_and_write": false, 00:12:36.441 "abort": true, 00:12:36.441 "seek_hole": false, 00:12:36.441 "seek_data": false, 00:12:36.441 "copy": true, 00:12:36.441 "nvme_iov_md": false 00:12:36.441 }, 00:12:36.441 
"memory_domains": [ 00:12:36.441 { 00:12:36.441 "dma_device_id": "system", 00:12:36.441 "dma_device_type": 1 00:12:36.441 }, 00:12:36.441 { 00:12:36.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.441 "dma_device_type": 2 00:12:36.441 } 00:12:36.441 ], 00:12:36.441 "driver_specific": {} 00:12:36.441 } 00:12:36.441 ] 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.441 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.441 "name": "Existed_Raid", 00:12:36.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.441 "strip_size_kb": 0, 00:12:36.441 "state": "configuring", 00:12:36.441 "raid_level": "raid1", 00:12:36.441 "superblock": false, 00:12:36.441 "num_base_bdevs": 3, 00:12:36.441 "num_base_bdevs_discovered": 2, 00:12:36.441 "num_base_bdevs_operational": 3, 00:12:36.441 "base_bdevs_list": [ 00:12:36.441 { 00:12:36.441 "name": "BaseBdev1", 00:12:36.441 "uuid": "092a6656-2849-4d1f-95d0-697dd685bfba", 00:12:36.441 "is_configured": true, 00:12:36.441 "data_offset": 0, 00:12:36.441 "data_size": 65536 00:12:36.442 }, 00:12:36.442 { 00:12:36.442 "name": "BaseBdev2", 00:12:36.442 "uuid": "c570abf3-b821-403c-8599-87035615cca4", 00:12:36.442 "is_configured": true, 00:12:36.442 "data_offset": 0, 00:12:36.442 "data_size": 65536 00:12:36.442 }, 00:12:36.442 { 00:12:36.442 "name": "BaseBdev3", 00:12:36.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.442 "is_configured": false, 00:12:36.442 "data_offset": 0, 00:12:36.442 "data_size": 0 00:12:36.442 } 00:12:36.442 ] 00:12:36.442 }' 00:12:36.442 18:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.442 18:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.009 [2024-12-06 18:11:02.297397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:37.009 [2024-12-06 18:11:02.297455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:37.009 [2024-12-06 18:11:02.297475] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:37.009 [2024-12-06 18:11:02.297865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:37.009 [2024-12-06 18:11:02.298099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:37.009 [2024-12-06 18:11:02.298116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:37.009 BaseBdev3 00:12:37.009 [2024-12-06 18:11:02.298471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.009 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.009 [ 00:12:37.009 { 00:12:37.009 "name": "BaseBdev3", 00:12:37.009 "aliases": [ 00:12:37.009 "1209d24b-85bb-41ae-91c1-52d6b1ef100a" 00:12:37.009 ], 00:12:37.009 "product_name": "Malloc disk", 00:12:37.009 "block_size": 512, 00:12:37.009 "num_blocks": 65536, 00:12:37.009 "uuid": "1209d24b-85bb-41ae-91c1-52d6b1ef100a", 00:12:37.009 "assigned_rate_limits": { 00:12:37.009 "rw_ios_per_sec": 0, 00:12:37.009 "rw_mbytes_per_sec": 0, 00:12:37.009 "r_mbytes_per_sec": 0, 00:12:37.009 "w_mbytes_per_sec": 0 00:12:37.009 }, 00:12:37.009 "claimed": true, 00:12:37.009 "claim_type": "exclusive_write", 00:12:37.009 "zoned": false, 00:12:37.009 "supported_io_types": { 00:12:37.009 "read": true, 00:12:37.009 "write": true, 00:12:37.009 "unmap": true, 00:12:37.009 "flush": true, 00:12:37.009 "reset": true, 00:12:37.009 "nvme_admin": false, 00:12:37.009 "nvme_io": false, 00:12:37.009 "nvme_io_md": false, 00:12:37.009 "write_zeroes": true, 00:12:37.009 "zcopy": true, 00:12:37.009 "get_zone_info": false, 00:12:37.009 "zone_management": false, 00:12:37.009 "zone_append": false, 00:12:37.009 "compare": false, 00:12:37.009 "compare_and_write": false, 00:12:37.009 "abort": true, 00:12:37.009 "seek_hole": false, 00:12:37.009 "seek_data": false, 00:12:37.009 
"copy": true, 00:12:37.009 "nvme_iov_md": false 00:12:37.009 }, 00:12:37.009 "memory_domains": [ 00:12:37.009 { 00:12:37.009 "dma_device_id": "system", 00:12:37.009 "dma_device_type": 1 00:12:37.009 }, 00:12:37.009 { 00:12:37.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.009 "dma_device_type": 2 00:12:37.009 } 00:12:37.009 ], 00:12:37.010 "driver_specific": {} 00:12:37.010 } 00:12:37.010 ] 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.010 18:11:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.010 "name": "Existed_Raid", 00:12:37.010 "uuid": "f2618246-10f1-4fa6-b19d-1f2c49d4869b", 00:12:37.010 "strip_size_kb": 0, 00:12:37.010 "state": "online", 00:12:37.010 "raid_level": "raid1", 00:12:37.010 "superblock": false, 00:12:37.010 "num_base_bdevs": 3, 00:12:37.010 "num_base_bdevs_discovered": 3, 00:12:37.010 "num_base_bdevs_operational": 3, 00:12:37.010 "base_bdevs_list": [ 00:12:37.010 { 00:12:37.010 "name": "BaseBdev1", 00:12:37.010 "uuid": "092a6656-2849-4d1f-95d0-697dd685bfba", 00:12:37.010 "is_configured": true, 00:12:37.010 "data_offset": 0, 00:12:37.010 "data_size": 65536 00:12:37.010 }, 00:12:37.010 { 00:12:37.010 "name": "BaseBdev2", 00:12:37.010 "uuid": "c570abf3-b821-403c-8599-87035615cca4", 00:12:37.010 "is_configured": true, 00:12:37.010 "data_offset": 0, 00:12:37.010 "data_size": 65536 00:12:37.010 }, 00:12:37.010 { 00:12:37.010 "name": "BaseBdev3", 00:12:37.010 "uuid": "1209d24b-85bb-41ae-91c1-52d6b1ef100a", 00:12:37.010 "is_configured": true, 00:12:37.010 "data_offset": 0, 00:12:37.010 "data_size": 65536 00:12:37.010 } 00:12:37.010 ] 00:12:37.010 }' 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.010 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.578 18:11:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:37.578 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:37.578 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:37.578 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:37.578 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:37.578 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:37.578 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:37.578 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.578 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.578 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:37.578 [2024-12-06 18:11:02.850024] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.578 18:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.578 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:37.578 "name": "Existed_Raid", 00:12:37.578 "aliases": [ 00:12:37.578 "f2618246-10f1-4fa6-b19d-1f2c49d4869b" 00:12:37.578 ], 00:12:37.578 "product_name": "Raid Volume", 00:12:37.578 "block_size": 512, 00:12:37.578 "num_blocks": 65536, 00:12:37.578 "uuid": "f2618246-10f1-4fa6-b19d-1f2c49d4869b", 00:12:37.578 "assigned_rate_limits": { 00:12:37.578 "rw_ios_per_sec": 0, 00:12:37.578 "rw_mbytes_per_sec": 0, 00:12:37.578 "r_mbytes_per_sec": 0, 00:12:37.578 "w_mbytes_per_sec": 0 00:12:37.578 }, 00:12:37.578 "claimed": false, 00:12:37.578 "zoned": false, 
00:12:37.578 "supported_io_types": { 00:12:37.578 "read": true, 00:12:37.578 "write": true, 00:12:37.578 "unmap": false, 00:12:37.578 "flush": false, 00:12:37.578 "reset": true, 00:12:37.578 "nvme_admin": false, 00:12:37.578 "nvme_io": false, 00:12:37.578 "nvme_io_md": false, 00:12:37.578 "write_zeroes": true, 00:12:37.578 "zcopy": false, 00:12:37.578 "get_zone_info": false, 00:12:37.578 "zone_management": false, 00:12:37.578 "zone_append": false, 00:12:37.578 "compare": false, 00:12:37.578 "compare_and_write": false, 00:12:37.578 "abort": false, 00:12:37.578 "seek_hole": false, 00:12:37.578 "seek_data": false, 00:12:37.578 "copy": false, 00:12:37.578 "nvme_iov_md": false 00:12:37.578 }, 00:12:37.578 "memory_domains": [ 00:12:37.578 { 00:12:37.578 "dma_device_id": "system", 00:12:37.578 "dma_device_type": 1 00:12:37.578 }, 00:12:37.578 { 00:12:37.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.578 "dma_device_type": 2 00:12:37.578 }, 00:12:37.578 { 00:12:37.578 "dma_device_id": "system", 00:12:37.578 "dma_device_type": 1 00:12:37.578 }, 00:12:37.578 { 00:12:37.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.578 "dma_device_type": 2 00:12:37.578 }, 00:12:37.578 { 00:12:37.578 "dma_device_id": "system", 00:12:37.578 "dma_device_type": 1 00:12:37.578 }, 00:12:37.578 { 00:12:37.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.578 "dma_device_type": 2 00:12:37.578 } 00:12:37.578 ], 00:12:37.578 "driver_specific": { 00:12:37.578 "raid": { 00:12:37.578 "uuid": "f2618246-10f1-4fa6-b19d-1f2c49d4869b", 00:12:37.578 "strip_size_kb": 0, 00:12:37.578 "state": "online", 00:12:37.578 "raid_level": "raid1", 00:12:37.578 "superblock": false, 00:12:37.578 "num_base_bdevs": 3, 00:12:37.578 "num_base_bdevs_discovered": 3, 00:12:37.578 "num_base_bdevs_operational": 3, 00:12:37.578 "base_bdevs_list": [ 00:12:37.578 { 00:12:37.578 "name": "BaseBdev1", 00:12:37.578 "uuid": "092a6656-2849-4d1f-95d0-697dd685bfba", 00:12:37.578 "is_configured": true, 00:12:37.578 
"data_offset": 0, 00:12:37.578 "data_size": 65536 00:12:37.578 }, 00:12:37.578 { 00:12:37.578 "name": "BaseBdev2", 00:12:37.578 "uuid": "c570abf3-b821-403c-8599-87035615cca4", 00:12:37.578 "is_configured": true, 00:12:37.578 "data_offset": 0, 00:12:37.578 "data_size": 65536 00:12:37.578 }, 00:12:37.578 { 00:12:37.578 "name": "BaseBdev3", 00:12:37.578 "uuid": "1209d24b-85bb-41ae-91c1-52d6b1ef100a", 00:12:37.578 "is_configured": true, 00:12:37.578 "data_offset": 0, 00:12:37.578 "data_size": 65536 00:12:37.578 } 00:12:37.578 ] 00:12:37.578 } 00:12:37.578 } 00:12:37.578 }' 00:12:37.578 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:37.579 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:37.579 BaseBdev2 00:12:37.579 BaseBdev3' 00:12:37.579 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.579 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:37.579 18:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.579 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:37.579 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.579 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.579 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.579 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.579 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:12:37.579 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.579 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.579 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:37.579 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.579 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.579 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.579 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.837 [2024-12-06 18:11:03.165817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.837 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.837 "name": "Existed_Raid", 00:12:37.837 "uuid": "f2618246-10f1-4fa6-b19d-1f2c49d4869b", 00:12:37.837 "strip_size_kb": 0, 00:12:37.837 "state": "online", 00:12:37.837 "raid_level": "raid1", 00:12:37.837 "superblock": false, 00:12:37.837 "num_base_bdevs": 3, 00:12:37.837 "num_base_bdevs_discovered": 2, 00:12:37.837 "num_base_bdevs_operational": 2, 00:12:37.837 "base_bdevs_list": [ 00:12:37.837 { 00:12:37.837 "name": null, 00:12:37.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.837 "is_configured": false, 00:12:37.837 "data_offset": 0, 00:12:37.837 "data_size": 65536 00:12:37.837 }, 00:12:37.837 { 00:12:37.837 "name": "BaseBdev2", 00:12:37.837 "uuid": "c570abf3-b821-403c-8599-87035615cca4", 00:12:37.837 "is_configured": true, 00:12:37.837 "data_offset": 0, 00:12:37.838 "data_size": 65536 00:12:37.838 }, 00:12:37.838 { 00:12:37.838 "name": "BaseBdev3", 00:12:37.838 "uuid": "1209d24b-85bb-41ae-91c1-52d6b1ef100a", 00:12:37.838 "is_configured": true, 00:12:37.838 "data_offset": 0, 00:12:37.838 "data_size": 65536 00:12:37.838 } 00:12:37.838 ] 
00:12:37.838 }' 00:12:37.838 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.838 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.404 [2024-12-06 18:11:03.818445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:38.404 18:11:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.404 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:38.662 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.663 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:38.663 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:38.663 18:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:38.663 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.663 18:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.663 [2024-12-06 18:11:03.965473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:38.663 [2024-12-06 18:11:03.965744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.663 [2024-12-06 18:11:04.051398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.663 [2024-12-06 18:11:04.051618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.663 [2024-12-06 18:11:04.051654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:38.663 18:11:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.663 BaseBdev2 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:38.663 
18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.663 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.663 [ 00:12:38.663 { 00:12:38.663 "name": "BaseBdev2", 00:12:38.663 "aliases": [ 00:12:38.663 "d69885ac-1cf6-43c9-8a30-53221a575d17" 00:12:38.663 ], 00:12:38.663 "product_name": "Malloc disk", 00:12:38.663 "block_size": 512, 00:12:38.663 "num_blocks": 65536, 00:12:38.663 "uuid": "d69885ac-1cf6-43c9-8a30-53221a575d17", 00:12:38.663 "assigned_rate_limits": { 00:12:38.663 "rw_ios_per_sec": 0, 00:12:38.663 "rw_mbytes_per_sec": 0, 00:12:38.663 "r_mbytes_per_sec": 0, 00:12:38.663 "w_mbytes_per_sec": 0 00:12:38.663 }, 00:12:38.663 "claimed": false, 00:12:38.663 "zoned": false, 00:12:38.663 "supported_io_types": { 00:12:38.663 "read": true, 00:12:38.663 "write": true, 00:12:38.663 "unmap": true, 00:12:38.663 "flush": true, 00:12:38.663 "reset": true, 00:12:38.663 "nvme_admin": false, 00:12:38.663 "nvme_io": false, 00:12:38.663 "nvme_io_md": false, 00:12:38.663 "write_zeroes": true, 
00:12:38.663 "zcopy": true, 00:12:38.663 "get_zone_info": false, 00:12:38.663 "zone_management": false, 00:12:38.663 "zone_append": false, 00:12:38.663 "compare": false, 00:12:38.663 "compare_and_write": false, 00:12:38.663 "abort": true, 00:12:38.663 "seek_hole": false, 00:12:38.922 "seek_data": false, 00:12:38.922 "copy": true, 00:12:38.922 "nvme_iov_md": false 00:12:38.922 }, 00:12:38.922 "memory_domains": [ 00:12:38.922 { 00:12:38.922 "dma_device_id": "system", 00:12:38.922 "dma_device_type": 1 00:12:38.922 }, 00:12:38.922 { 00:12:38.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.922 "dma_device_type": 2 00:12:38.922 } 00:12:38.922 ], 00:12:38.922 "driver_specific": {} 00:12:38.922 } 00:12:38.922 ] 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.922 BaseBdev3 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:38.922 18:11:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.922 [ 00:12:38.922 { 00:12:38.922 "name": "BaseBdev3", 00:12:38.922 "aliases": [ 00:12:38.922 "f076c82c-70fb-4372-9de7-4e12d89403ef" 00:12:38.922 ], 00:12:38.922 "product_name": "Malloc disk", 00:12:38.922 "block_size": 512, 00:12:38.922 "num_blocks": 65536, 00:12:38.922 "uuid": "f076c82c-70fb-4372-9de7-4e12d89403ef", 00:12:38.922 "assigned_rate_limits": { 00:12:38.922 "rw_ios_per_sec": 0, 00:12:38.922 "rw_mbytes_per_sec": 0, 00:12:38.922 "r_mbytes_per_sec": 0, 00:12:38.922 "w_mbytes_per_sec": 0 00:12:38.922 }, 00:12:38.922 "claimed": false, 00:12:38.922 "zoned": false, 00:12:38.922 "supported_io_types": { 00:12:38.922 "read": true, 00:12:38.922 "write": true, 00:12:38.922 "unmap": true, 00:12:38.922 "flush": true, 00:12:38.922 "reset": true, 00:12:38.922 "nvme_admin": false, 00:12:38.922 "nvme_io": false, 00:12:38.922 "nvme_io_md": false, 00:12:38.922 "write_zeroes": true, 
00:12:38.922 "zcopy": true, 00:12:38.922 "get_zone_info": false, 00:12:38.922 "zone_management": false, 00:12:38.922 "zone_append": false, 00:12:38.922 "compare": false, 00:12:38.922 "compare_and_write": false, 00:12:38.922 "abort": true, 00:12:38.922 "seek_hole": false, 00:12:38.922 "seek_data": false, 00:12:38.922 "copy": true, 00:12:38.922 "nvme_iov_md": false 00:12:38.922 }, 00:12:38.922 "memory_domains": [ 00:12:38.922 { 00:12:38.922 "dma_device_id": "system", 00:12:38.922 "dma_device_type": 1 00:12:38.922 }, 00:12:38.922 { 00:12:38.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.922 "dma_device_type": 2 00:12:38.922 } 00:12:38.922 ], 00:12:38.922 "driver_specific": {} 00:12:38.922 } 00:12:38.922 ] 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.922 [2024-12-06 18:11:04.261606] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:38.922 [2024-12-06 18:11:04.261812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:38.922 [2024-12-06 18:11:04.261968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.922 [2024-12-06 18:11:04.264528] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.922 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:38.922 "name": "Existed_Raid", 00:12:38.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.922 "strip_size_kb": 0, 00:12:38.922 "state": "configuring", 00:12:38.922 "raid_level": "raid1", 00:12:38.922 "superblock": false, 00:12:38.922 "num_base_bdevs": 3, 00:12:38.922 "num_base_bdevs_discovered": 2, 00:12:38.922 "num_base_bdevs_operational": 3, 00:12:38.922 "base_bdevs_list": [ 00:12:38.922 { 00:12:38.922 "name": "BaseBdev1", 00:12:38.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.922 "is_configured": false, 00:12:38.922 "data_offset": 0, 00:12:38.922 "data_size": 0 00:12:38.922 }, 00:12:38.922 { 00:12:38.922 "name": "BaseBdev2", 00:12:38.922 "uuid": "d69885ac-1cf6-43c9-8a30-53221a575d17", 00:12:38.922 "is_configured": true, 00:12:38.922 "data_offset": 0, 00:12:38.922 "data_size": 65536 00:12:38.922 }, 00:12:38.922 { 00:12:38.922 "name": "BaseBdev3", 00:12:38.922 "uuid": "f076c82c-70fb-4372-9de7-4e12d89403ef", 00:12:38.922 "is_configured": true, 00:12:38.922 "data_offset": 0, 00:12:38.922 "data_size": 65536 00:12:38.922 } 00:12:38.922 ] 00:12:38.923 }' 00:12:38.923 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.923 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.490 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:39.490 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.490 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.490 [2024-12-06 18:11:04.821794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:39.490 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.490 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:12:39.490 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.490 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.490 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.491 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.491 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.491 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.491 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.491 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.491 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.491 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.491 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.491 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.491 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.491 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.491 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.491 "name": "Existed_Raid", 00:12:39.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.491 "strip_size_kb": 0, 00:12:39.491 "state": "configuring", 00:12:39.491 "raid_level": "raid1", 00:12:39.491 "superblock": false, 00:12:39.491 "num_base_bdevs": 3, 
00:12:39.491 "num_base_bdevs_discovered": 1, 00:12:39.491 "num_base_bdevs_operational": 3, 00:12:39.491 "base_bdevs_list": [ 00:12:39.491 { 00:12:39.491 "name": "BaseBdev1", 00:12:39.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.491 "is_configured": false, 00:12:39.491 "data_offset": 0, 00:12:39.491 "data_size": 0 00:12:39.491 }, 00:12:39.491 { 00:12:39.491 "name": null, 00:12:39.491 "uuid": "d69885ac-1cf6-43c9-8a30-53221a575d17", 00:12:39.491 "is_configured": false, 00:12:39.491 "data_offset": 0, 00:12:39.491 "data_size": 65536 00:12:39.491 }, 00:12:39.491 { 00:12:39.491 "name": "BaseBdev3", 00:12:39.491 "uuid": "f076c82c-70fb-4372-9de7-4e12d89403ef", 00:12:39.491 "is_configured": true, 00:12:39.491 "data_offset": 0, 00:12:39.491 "data_size": 65536 00:12:39.491 } 00:12:39.491 ] 00:12:39.491 }' 00:12:39.491 18:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.491 18:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.099 18:11:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.099 [2024-12-06 18:11:05.403425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.099 BaseBdev1 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.099 [ 00:12:40.099 { 00:12:40.099 "name": "BaseBdev1", 00:12:40.099 "aliases": [ 00:12:40.099 "fa4c2095-c5a0-4dfb-bd26-5e18ed8efb22" 00:12:40.099 ], 00:12:40.099 "product_name": "Malloc disk", 
00:12:40.099 "block_size": 512, 00:12:40.099 "num_blocks": 65536, 00:12:40.099 "uuid": "fa4c2095-c5a0-4dfb-bd26-5e18ed8efb22", 00:12:40.099 "assigned_rate_limits": { 00:12:40.099 "rw_ios_per_sec": 0, 00:12:40.099 "rw_mbytes_per_sec": 0, 00:12:40.099 "r_mbytes_per_sec": 0, 00:12:40.099 "w_mbytes_per_sec": 0 00:12:40.099 }, 00:12:40.099 "claimed": true, 00:12:40.099 "claim_type": "exclusive_write", 00:12:40.099 "zoned": false, 00:12:40.099 "supported_io_types": { 00:12:40.099 "read": true, 00:12:40.099 "write": true, 00:12:40.099 "unmap": true, 00:12:40.099 "flush": true, 00:12:40.099 "reset": true, 00:12:40.099 "nvme_admin": false, 00:12:40.099 "nvme_io": false, 00:12:40.099 "nvme_io_md": false, 00:12:40.099 "write_zeroes": true, 00:12:40.099 "zcopy": true, 00:12:40.099 "get_zone_info": false, 00:12:40.099 "zone_management": false, 00:12:40.099 "zone_append": false, 00:12:40.099 "compare": false, 00:12:40.099 "compare_and_write": false, 00:12:40.099 "abort": true, 00:12:40.099 "seek_hole": false, 00:12:40.099 "seek_data": false, 00:12:40.099 "copy": true, 00:12:40.099 "nvme_iov_md": false 00:12:40.099 }, 00:12:40.099 "memory_domains": [ 00:12:40.099 { 00:12:40.099 "dma_device_id": "system", 00:12:40.099 "dma_device_type": 1 00:12:40.099 }, 00:12:40.099 { 00:12:40.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.099 "dma_device_type": 2 00:12:40.099 } 00:12:40.099 ], 00:12:40.099 "driver_specific": {} 00:12:40.099 } 00:12:40.099 ] 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.099 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.100 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.100 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.100 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.100 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.100 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.100 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.100 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.100 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.100 "name": "Existed_Raid", 00:12:40.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.100 "strip_size_kb": 0, 00:12:40.100 "state": "configuring", 00:12:40.100 "raid_level": "raid1", 00:12:40.100 "superblock": false, 00:12:40.100 "num_base_bdevs": 3, 00:12:40.100 "num_base_bdevs_discovered": 2, 00:12:40.100 "num_base_bdevs_operational": 3, 00:12:40.100 "base_bdevs_list": [ 00:12:40.100 { 00:12:40.100 "name": "BaseBdev1", 00:12:40.100 "uuid": 
"fa4c2095-c5a0-4dfb-bd26-5e18ed8efb22", 00:12:40.100 "is_configured": true, 00:12:40.100 "data_offset": 0, 00:12:40.100 "data_size": 65536 00:12:40.100 }, 00:12:40.100 { 00:12:40.100 "name": null, 00:12:40.100 "uuid": "d69885ac-1cf6-43c9-8a30-53221a575d17", 00:12:40.100 "is_configured": false, 00:12:40.100 "data_offset": 0, 00:12:40.100 "data_size": 65536 00:12:40.100 }, 00:12:40.100 { 00:12:40.100 "name": "BaseBdev3", 00:12:40.100 "uuid": "f076c82c-70fb-4372-9de7-4e12d89403ef", 00:12:40.100 "is_configured": true, 00:12:40.100 "data_offset": 0, 00:12:40.100 "data_size": 65536 00:12:40.100 } 00:12:40.100 ] 00:12:40.100 }' 00:12:40.100 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.100 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.667 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.667 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.667 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:40.667 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.667 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.667 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:40.667 18:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:40.667 18:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.667 [2024-12-06 18:11:06.003624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:40.667 18:11:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.667 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.667 "name": "Existed_Raid", 00:12:40.667 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:40.667 "strip_size_kb": 0, 00:12:40.667 "state": "configuring", 00:12:40.667 "raid_level": "raid1", 00:12:40.667 "superblock": false, 00:12:40.667 "num_base_bdevs": 3, 00:12:40.667 "num_base_bdevs_discovered": 1, 00:12:40.667 "num_base_bdevs_operational": 3, 00:12:40.667 "base_bdevs_list": [ 00:12:40.667 { 00:12:40.667 "name": "BaseBdev1", 00:12:40.667 "uuid": "fa4c2095-c5a0-4dfb-bd26-5e18ed8efb22", 00:12:40.667 "is_configured": true, 00:12:40.667 "data_offset": 0, 00:12:40.667 "data_size": 65536 00:12:40.667 }, 00:12:40.667 { 00:12:40.667 "name": null, 00:12:40.667 "uuid": "d69885ac-1cf6-43c9-8a30-53221a575d17", 00:12:40.667 "is_configured": false, 00:12:40.667 "data_offset": 0, 00:12:40.667 "data_size": 65536 00:12:40.667 }, 00:12:40.667 { 00:12:40.667 "name": null, 00:12:40.667 "uuid": "f076c82c-70fb-4372-9de7-4e12d89403ef", 00:12:40.667 "is_configured": false, 00:12:40.667 "data_offset": 0, 00:12:40.667 "data_size": 65536 00:12:40.668 } 00:12:40.668 ] 00:12:40.668 }' 00:12:40.668 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.668 18:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.234 [2024-12-06 18:11:06.547872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.234 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.234 "name": "Existed_Raid", 00:12:41.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.234 "strip_size_kb": 0, 00:12:41.234 "state": "configuring", 00:12:41.234 "raid_level": "raid1", 00:12:41.234 "superblock": false, 00:12:41.234 "num_base_bdevs": 3, 00:12:41.234 "num_base_bdevs_discovered": 2, 00:12:41.234 "num_base_bdevs_operational": 3, 00:12:41.234 "base_bdevs_list": [ 00:12:41.234 { 00:12:41.234 "name": "BaseBdev1", 00:12:41.234 "uuid": "fa4c2095-c5a0-4dfb-bd26-5e18ed8efb22", 00:12:41.234 "is_configured": true, 00:12:41.234 "data_offset": 0, 00:12:41.234 "data_size": 65536 00:12:41.234 }, 00:12:41.234 { 00:12:41.234 "name": null, 00:12:41.234 "uuid": "d69885ac-1cf6-43c9-8a30-53221a575d17", 00:12:41.234 "is_configured": false, 00:12:41.234 "data_offset": 0, 00:12:41.235 "data_size": 65536 00:12:41.235 }, 00:12:41.235 { 00:12:41.235 "name": "BaseBdev3", 00:12:41.235 "uuid": "f076c82c-70fb-4372-9de7-4e12d89403ef", 00:12:41.235 "is_configured": true, 00:12:41.235 "data_offset": 0, 00:12:41.235 "data_size": 65536 00:12:41.235 } 00:12:41.235 ] 00:12:41.235 }' 00:12:41.235 18:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.235 18:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.803 [2024-12-06 18:11:07.116026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.803 18:11:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.803 "name": "Existed_Raid", 00:12:41.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.803 "strip_size_kb": 0, 00:12:41.803 "state": "configuring", 00:12:41.803 "raid_level": "raid1", 00:12:41.803 "superblock": false, 00:12:41.803 "num_base_bdevs": 3, 00:12:41.803 "num_base_bdevs_discovered": 1, 00:12:41.803 "num_base_bdevs_operational": 3, 00:12:41.803 "base_bdevs_list": [ 00:12:41.803 { 00:12:41.803 "name": null, 00:12:41.803 "uuid": "fa4c2095-c5a0-4dfb-bd26-5e18ed8efb22", 00:12:41.803 "is_configured": false, 00:12:41.803 "data_offset": 0, 00:12:41.803 "data_size": 65536 00:12:41.803 }, 00:12:41.803 { 00:12:41.803 "name": null, 00:12:41.803 "uuid": "d69885ac-1cf6-43c9-8a30-53221a575d17", 00:12:41.803 "is_configured": false, 00:12:41.803 "data_offset": 0, 00:12:41.803 "data_size": 65536 00:12:41.803 }, 00:12:41.803 { 00:12:41.803 "name": "BaseBdev3", 00:12:41.803 "uuid": "f076c82c-70fb-4372-9de7-4e12d89403ef", 00:12:41.803 "is_configured": true, 00:12:41.803 "data_offset": 0, 00:12:41.803 "data_size": 65536 00:12:41.803 } 00:12:41.803 ] 00:12:41.803 }' 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.803 18:11:07 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.371 [2024-12-06 18:11:07.770161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.371 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:12:42.372 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.372 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.372 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.372 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.372 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.372 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.372 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.372 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.372 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.372 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.372 "name": "Existed_Raid", 00:12:42.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.372 "strip_size_kb": 0, 00:12:42.372 "state": "configuring", 00:12:42.372 "raid_level": "raid1", 00:12:42.372 "superblock": false, 00:12:42.372 "num_base_bdevs": 3, 00:12:42.372 "num_base_bdevs_discovered": 2, 00:12:42.372 "num_base_bdevs_operational": 3, 00:12:42.372 "base_bdevs_list": [ 00:12:42.372 { 00:12:42.372 "name": null, 00:12:42.372 "uuid": "fa4c2095-c5a0-4dfb-bd26-5e18ed8efb22", 00:12:42.372 "is_configured": false, 00:12:42.372 "data_offset": 0, 00:12:42.372 "data_size": 65536 00:12:42.372 }, 00:12:42.372 { 00:12:42.372 "name": "BaseBdev2", 00:12:42.372 "uuid": "d69885ac-1cf6-43c9-8a30-53221a575d17", 00:12:42.372 "is_configured": true, 00:12:42.372 "data_offset": 0, 00:12:42.372 "data_size": 65536 00:12:42.372 }, 00:12:42.372 { 
00:12:42.372 "name": "BaseBdev3", 00:12:42.372 "uuid": "f076c82c-70fb-4372-9de7-4e12d89403ef", 00:12:42.372 "is_configured": true, 00:12:42.372 "data_offset": 0, 00:12:42.372 "data_size": 65536 00:12:42.372 } 00:12:42.372 ] 00:12:42.372 }' 00:12:42.372 18:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.372 18:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fa4c2095-c5a0-4dfb-bd26-5e18ed8efb22 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.939 18:11:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.939 [2024-12-06 18:11:08.437081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:42.939 [2024-12-06 18:11:08.437343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:42.939 [2024-12-06 18:11:08.437366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:42.939 [2024-12-06 18:11:08.437689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:42.939 [2024-12-06 18:11:08.437973] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:42.939 [2024-12-06 18:11:08.437996] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:42.939 NewBaseBdev 00:12:42.939 [2024-12-06 18:11:08.438289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:42.939 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:42.940 18:11:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.940 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.940 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.940 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:42.940 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.940 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.199 [ 00:12:43.199 { 00:12:43.199 "name": "NewBaseBdev", 00:12:43.199 "aliases": [ 00:12:43.199 "fa4c2095-c5a0-4dfb-bd26-5e18ed8efb22" 00:12:43.199 ], 00:12:43.199 "product_name": "Malloc disk", 00:12:43.199 "block_size": 512, 00:12:43.199 "num_blocks": 65536, 00:12:43.199 "uuid": "fa4c2095-c5a0-4dfb-bd26-5e18ed8efb22", 00:12:43.199 "assigned_rate_limits": { 00:12:43.199 "rw_ios_per_sec": 0, 00:12:43.199 "rw_mbytes_per_sec": 0, 00:12:43.199 "r_mbytes_per_sec": 0, 00:12:43.199 "w_mbytes_per_sec": 0 00:12:43.199 }, 00:12:43.199 "claimed": true, 00:12:43.199 "claim_type": "exclusive_write", 00:12:43.199 "zoned": false, 00:12:43.199 "supported_io_types": { 00:12:43.199 "read": true, 00:12:43.199 "write": true, 00:12:43.199 "unmap": true, 00:12:43.199 "flush": true, 00:12:43.199 "reset": true, 00:12:43.199 "nvme_admin": false, 00:12:43.199 "nvme_io": false, 00:12:43.199 "nvme_io_md": false, 00:12:43.199 "write_zeroes": true, 00:12:43.199 "zcopy": true, 00:12:43.199 "get_zone_info": false, 00:12:43.199 "zone_management": false, 00:12:43.199 "zone_append": false, 00:12:43.199 "compare": false, 00:12:43.199 "compare_and_write": false, 00:12:43.199 "abort": true, 00:12:43.199 "seek_hole": false, 00:12:43.199 "seek_data": false, 00:12:43.199 "copy": true, 00:12:43.199 "nvme_iov_md": false 00:12:43.199 }, 00:12:43.199 "memory_domains": [ 00:12:43.199 { 00:12:43.199 
"dma_device_id": "system", 00:12:43.199 "dma_device_type": 1 00:12:43.199 }, 00:12:43.199 { 00:12:43.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.199 "dma_device_type": 2 00:12:43.199 } 00:12:43.199 ], 00:12:43.199 "driver_specific": {} 00:12:43.199 } 00:12:43.199 ] 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.199 18:11:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.199 "name": "Existed_Raid", 00:12:43.199 "uuid": "3f1b2464-7052-4f2c-8d8c-6caa0d5f7b2b", 00:12:43.199 "strip_size_kb": 0, 00:12:43.199 "state": "online", 00:12:43.199 "raid_level": "raid1", 00:12:43.199 "superblock": false, 00:12:43.199 "num_base_bdevs": 3, 00:12:43.199 "num_base_bdevs_discovered": 3, 00:12:43.199 "num_base_bdevs_operational": 3, 00:12:43.199 "base_bdevs_list": [ 00:12:43.199 { 00:12:43.199 "name": "NewBaseBdev", 00:12:43.199 "uuid": "fa4c2095-c5a0-4dfb-bd26-5e18ed8efb22", 00:12:43.199 "is_configured": true, 00:12:43.199 "data_offset": 0, 00:12:43.199 "data_size": 65536 00:12:43.199 }, 00:12:43.199 { 00:12:43.199 "name": "BaseBdev2", 00:12:43.199 "uuid": "d69885ac-1cf6-43c9-8a30-53221a575d17", 00:12:43.199 "is_configured": true, 00:12:43.199 "data_offset": 0, 00:12:43.199 "data_size": 65536 00:12:43.199 }, 00:12:43.199 { 00:12:43.199 "name": "BaseBdev3", 00:12:43.199 "uuid": "f076c82c-70fb-4372-9de7-4e12d89403ef", 00:12:43.199 "is_configured": true, 00:12:43.199 "data_offset": 0, 00:12:43.199 "data_size": 65536 00:12:43.199 } 00:12:43.199 ] 00:12:43.199 }' 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.199 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.768 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:43.768 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:43.768 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:43.768 
18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:43.768 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:43.768 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:43.768 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:43.768 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.768 18:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.768 18:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:43.768 [2024-12-06 18:11:08.998605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.768 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.768 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:43.768 "name": "Existed_Raid", 00:12:43.768 "aliases": [ 00:12:43.768 "3f1b2464-7052-4f2c-8d8c-6caa0d5f7b2b" 00:12:43.768 ], 00:12:43.768 "product_name": "Raid Volume", 00:12:43.768 "block_size": 512, 00:12:43.768 "num_blocks": 65536, 00:12:43.768 "uuid": "3f1b2464-7052-4f2c-8d8c-6caa0d5f7b2b", 00:12:43.768 "assigned_rate_limits": { 00:12:43.768 "rw_ios_per_sec": 0, 00:12:43.768 "rw_mbytes_per_sec": 0, 00:12:43.768 "r_mbytes_per_sec": 0, 00:12:43.768 "w_mbytes_per_sec": 0 00:12:43.768 }, 00:12:43.768 "claimed": false, 00:12:43.768 "zoned": false, 00:12:43.768 "supported_io_types": { 00:12:43.768 "read": true, 00:12:43.768 "write": true, 00:12:43.768 "unmap": false, 00:12:43.768 "flush": false, 00:12:43.769 "reset": true, 00:12:43.769 "nvme_admin": false, 00:12:43.769 "nvme_io": false, 00:12:43.769 "nvme_io_md": false, 00:12:43.769 "write_zeroes": true, 00:12:43.769 "zcopy": false, 00:12:43.769 
"get_zone_info": false, 00:12:43.769 "zone_management": false, 00:12:43.769 "zone_append": false, 00:12:43.769 "compare": false, 00:12:43.769 "compare_and_write": false, 00:12:43.769 "abort": false, 00:12:43.769 "seek_hole": false, 00:12:43.769 "seek_data": false, 00:12:43.769 "copy": false, 00:12:43.769 "nvme_iov_md": false 00:12:43.769 }, 00:12:43.769 "memory_domains": [ 00:12:43.769 { 00:12:43.769 "dma_device_id": "system", 00:12:43.769 "dma_device_type": 1 00:12:43.769 }, 00:12:43.769 { 00:12:43.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.769 "dma_device_type": 2 00:12:43.769 }, 00:12:43.769 { 00:12:43.769 "dma_device_id": "system", 00:12:43.769 "dma_device_type": 1 00:12:43.769 }, 00:12:43.769 { 00:12:43.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.769 "dma_device_type": 2 00:12:43.769 }, 00:12:43.769 { 00:12:43.769 "dma_device_id": "system", 00:12:43.769 "dma_device_type": 1 00:12:43.769 }, 00:12:43.769 { 00:12:43.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.769 "dma_device_type": 2 00:12:43.769 } 00:12:43.769 ], 00:12:43.769 "driver_specific": { 00:12:43.769 "raid": { 00:12:43.769 "uuid": "3f1b2464-7052-4f2c-8d8c-6caa0d5f7b2b", 00:12:43.769 "strip_size_kb": 0, 00:12:43.769 "state": "online", 00:12:43.769 "raid_level": "raid1", 00:12:43.769 "superblock": false, 00:12:43.769 "num_base_bdevs": 3, 00:12:43.769 "num_base_bdevs_discovered": 3, 00:12:43.769 "num_base_bdevs_operational": 3, 00:12:43.769 "base_bdevs_list": [ 00:12:43.769 { 00:12:43.769 "name": "NewBaseBdev", 00:12:43.769 "uuid": "fa4c2095-c5a0-4dfb-bd26-5e18ed8efb22", 00:12:43.769 "is_configured": true, 00:12:43.769 "data_offset": 0, 00:12:43.769 "data_size": 65536 00:12:43.769 }, 00:12:43.769 { 00:12:43.769 "name": "BaseBdev2", 00:12:43.769 "uuid": "d69885ac-1cf6-43c9-8a30-53221a575d17", 00:12:43.769 "is_configured": true, 00:12:43.769 "data_offset": 0, 00:12:43.769 "data_size": 65536 00:12:43.769 }, 00:12:43.769 { 00:12:43.769 "name": "BaseBdev3", 00:12:43.769 "uuid": 
"f076c82c-70fb-4372-9de7-4e12d89403ef", 00:12:43.769 "is_configured": true, 00:12:43.769 "data_offset": 0, 00:12:43.769 "data_size": 65536 00:12:43.769 } 00:12:43.769 ] 00:12:43.769 } 00:12:43.769 } 00:12:43.769 }' 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:43.769 BaseBdev2 00:12:43.769 BaseBdev3' 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.769 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.043 
[2024-12-06 18:11:09.310236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.043 [2024-12-06 18:11:09.310421] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.043 [2024-12-06 18:11:09.310642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.043 [2024-12-06 18:11:09.311190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.043 [2024-12-06 18:11:09.311219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67508 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67508 ']' 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67508 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67508 00:12:44.043 killing process with pid 67508 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67508' 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67508 00:12:44.043 [2024-12-06 
18:11:09.349342] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.043 18:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67508 00:12:44.305 [2024-12-06 18:11:09.628727] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.243 18:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:45.243 00:12:45.243 real 0m11.718s 00:12:45.243 user 0m19.456s 00:12:45.243 sys 0m1.581s 00:12:45.243 ************************************ 00:12:45.243 END TEST raid_state_function_test 00:12:45.243 ************************************ 00:12:45.243 18:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.243 18:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.243 18:11:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:12:45.243 18:11:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:45.243 18:11:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.243 18:11:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.243 ************************************ 00:12:45.243 START TEST raid_state_function_test_sb 00:12:45.243 ************************************ 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:45.244 18:11:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:45.244 
18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68140 00:12:45.244 Process raid pid: 68140 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68140' 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68140 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68140 ']' 00:12:45.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.244 18:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.504 [2024-12-06 18:11:10.846887] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:12:45.504 [2024-12-06 18:11:10.847072] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.763 [2024-12-06 18:11:11.032697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.763 [2024-12-06 18:11:11.164610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.022 [2024-12-06 18:11:11.368034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.022 [2024-12-06 18:11:11.368092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.282 [2024-12-06 18:11:11.769734] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.282 [2024-12-06 18:11:11.769966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.282 [2024-12-06 18:11:11.770097] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:46.282 [2024-12-06 18:11:11.770161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:46.282 [2024-12-06 18:11:11.770325] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:46.282 [2024-12-06 18:11:11.770393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.282 18:11:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.541 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.541 "name": "Existed_Raid", 00:12:46.541 "uuid": "d23c78ee-3fef-490d-99d1-9a083d8e0d1e", 00:12:46.541 "strip_size_kb": 0, 00:12:46.541 "state": "configuring", 00:12:46.541 "raid_level": "raid1", 00:12:46.541 "superblock": true, 00:12:46.541 "num_base_bdevs": 3, 00:12:46.541 "num_base_bdevs_discovered": 0, 00:12:46.541 "num_base_bdevs_operational": 3, 00:12:46.541 "base_bdevs_list": [ 00:12:46.541 { 00:12:46.541 "name": "BaseBdev1", 00:12:46.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.541 "is_configured": false, 00:12:46.541 "data_offset": 0, 00:12:46.541 "data_size": 0 00:12:46.541 }, 00:12:46.541 { 00:12:46.541 "name": "BaseBdev2", 00:12:46.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.541 "is_configured": false, 00:12:46.541 "data_offset": 0, 00:12:46.541 "data_size": 0 00:12:46.541 }, 00:12:46.541 { 00:12:46.541 "name": "BaseBdev3", 00:12:46.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.541 "is_configured": false, 00:12:46.541 "data_offset": 0, 00:12:46.541 "data_size": 0 00:12:46.541 } 00:12:46.541 ] 00:12:46.541 }' 00:12:46.541 18:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.541 18:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.800 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:46.800 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.800 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.800 [2024-12-06 18:11:12.293801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:46.800 [2024-12-06 18:11:12.293970] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:46.800 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.800 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:46.800 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.800 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.800 [2024-12-06 18:11:12.301793] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.800 [2024-12-06 18:11:12.301964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.800 [2024-12-06 18:11:12.302091] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:46.800 [2024-12-06 18:11:12.302150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:46.800 [2024-12-06 18:11:12.302189] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:46.800 [2024-12-06 18:11:12.302325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:46.800 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.800 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:46.800 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.800 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.059 [2024-12-06 18:11:12.346351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.059 BaseBdev1 
00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.059 [ 00:12:47.059 { 00:12:47.059 "name": "BaseBdev1", 00:12:47.059 "aliases": [ 00:12:47.059 "1b6bb48d-73d6-435a-8e64-aecdfc02ad0c" 00:12:47.059 ], 00:12:47.059 "product_name": "Malloc disk", 00:12:47.059 "block_size": 512, 00:12:47.059 "num_blocks": 65536, 00:12:47.059 "uuid": "1b6bb48d-73d6-435a-8e64-aecdfc02ad0c", 00:12:47.059 "assigned_rate_limits": { 00:12:47.059 
"rw_ios_per_sec": 0, 00:12:47.059 "rw_mbytes_per_sec": 0, 00:12:47.059 "r_mbytes_per_sec": 0, 00:12:47.059 "w_mbytes_per_sec": 0 00:12:47.059 }, 00:12:47.059 "claimed": true, 00:12:47.059 "claim_type": "exclusive_write", 00:12:47.059 "zoned": false, 00:12:47.059 "supported_io_types": { 00:12:47.059 "read": true, 00:12:47.059 "write": true, 00:12:47.059 "unmap": true, 00:12:47.059 "flush": true, 00:12:47.059 "reset": true, 00:12:47.059 "nvme_admin": false, 00:12:47.059 "nvme_io": false, 00:12:47.059 "nvme_io_md": false, 00:12:47.059 "write_zeroes": true, 00:12:47.059 "zcopy": true, 00:12:47.059 "get_zone_info": false, 00:12:47.059 "zone_management": false, 00:12:47.059 "zone_append": false, 00:12:47.059 "compare": false, 00:12:47.059 "compare_and_write": false, 00:12:47.059 "abort": true, 00:12:47.059 "seek_hole": false, 00:12:47.059 "seek_data": false, 00:12:47.059 "copy": true, 00:12:47.059 "nvme_iov_md": false 00:12:47.059 }, 00:12:47.059 "memory_domains": [ 00:12:47.059 { 00:12:47.059 "dma_device_id": "system", 00:12:47.059 "dma_device_type": 1 00:12:47.059 }, 00:12:47.059 { 00:12:47.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.059 "dma_device_type": 2 00:12:47.059 } 00:12:47.059 ], 00:12:47.059 "driver_specific": {} 00:12:47.059 } 00:12:47.059 ] 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.059 "name": "Existed_Raid", 00:12:47.059 "uuid": "009fa184-4e4e-4973-ad27-d6f15f29fe17", 00:12:47.059 "strip_size_kb": 0, 00:12:47.059 "state": "configuring", 00:12:47.059 "raid_level": "raid1", 00:12:47.059 "superblock": true, 00:12:47.059 "num_base_bdevs": 3, 00:12:47.059 "num_base_bdevs_discovered": 1, 00:12:47.059 "num_base_bdevs_operational": 3, 00:12:47.059 "base_bdevs_list": [ 00:12:47.059 { 00:12:47.059 "name": "BaseBdev1", 00:12:47.059 "uuid": "1b6bb48d-73d6-435a-8e64-aecdfc02ad0c", 00:12:47.059 "is_configured": true, 00:12:47.059 "data_offset": 2048, 00:12:47.059 "data_size": 63488 
00:12:47.059 }, 00:12:47.059 { 00:12:47.059 "name": "BaseBdev2", 00:12:47.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.059 "is_configured": false, 00:12:47.059 "data_offset": 0, 00:12:47.059 "data_size": 0 00:12:47.059 }, 00:12:47.059 { 00:12:47.059 "name": "BaseBdev3", 00:12:47.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.059 "is_configured": false, 00:12:47.059 "data_offset": 0, 00:12:47.059 "data_size": 0 00:12:47.059 } 00:12:47.059 ] 00:12:47.059 }' 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.059 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.628 [2024-12-06 18:11:12.910575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:47.628 [2024-12-06 18:11:12.910635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.628 [2024-12-06 18:11:12.918630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.628 [2024-12-06 18:11:12.921207] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:47.628 [2024-12-06 18:11:12.921388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:47.628 [2024-12-06 18:11:12.921550] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:47.628 [2024-12-06 18:11:12.921620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.628 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.629 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.629 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.629 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.629 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.629 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:47.629 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.629 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.629 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.629 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.629 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.629 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.629 "name": "Existed_Raid", 00:12:47.629 "uuid": "320f6e2a-1423-4758-b7c6-9dff5e5c2860", 00:12:47.629 "strip_size_kb": 0, 00:12:47.629 "state": "configuring", 00:12:47.629 "raid_level": "raid1", 00:12:47.629 "superblock": true, 00:12:47.629 "num_base_bdevs": 3, 00:12:47.629 "num_base_bdevs_discovered": 1, 00:12:47.629 "num_base_bdevs_operational": 3, 00:12:47.629 "base_bdevs_list": [ 00:12:47.629 { 00:12:47.629 "name": "BaseBdev1", 00:12:47.629 "uuid": "1b6bb48d-73d6-435a-8e64-aecdfc02ad0c", 00:12:47.629 "is_configured": true, 00:12:47.629 "data_offset": 2048, 00:12:47.629 "data_size": 63488 00:12:47.629 }, 00:12:47.629 { 00:12:47.629 "name": "BaseBdev2", 00:12:47.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.629 "is_configured": false, 00:12:47.629 "data_offset": 0, 00:12:47.629 "data_size": 0 00:12:47.629 }, 00:12:47.629 { 00:12:47.629 "name": "BaseBdev3", 00:12:47.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.629 "is_configured": false, 00:12:47.629 "data_offset": 0, 00:12:47.629 "data_size": 0 00:12:47.629 } 00:12:47.629 ] 00:12:47.629 }' 00:12:47.629 18:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.629 18:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.196 [2024-12-06 18:11:13.488675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.196 BaseBdev2 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:48.196 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.196 [ 00:12:48.196 { 00:12:48.196 "name": "BaseBdev2", 00:12:48.196 "aliases": [ 00:12:48.197 "383d68be-8660-4e4d-9143-5adaa6239dfc" 00:12:48.197 ], 00:12:48.197 "product_name": "Malloc disk", 00:12:48.197 "block_size": 512, 00:12:48.197 "num_blocks": 65536, 00:12:48.197 "uuid": "383d68be-8660-4e4d-9143-5adaa6239dfc", 00:12:48.197 "assigned_rate_limits": { 00:12:48.197 "rw_ios_per_sec": 0, 00:12:48.197 "rw_mbytes_per_sec": 0, 00:12:48.197 "r_mbytes_per_sec": 0, 00:12:48.197 "w_mbytes_per_sec": 0 00:12:48.197 }, 00:12:48.197 "claimed": true, 00:12:48.197 "claim_type": "exclusive_write", 00:12:48.197 "zoned": false, 00:12:48.197 "supported_io_types": { 00:12:48.197 "read": true, 00:12:48.197 "write": true, 00:12:48.197 "unmap": true, 00:12:48.197 "flush": true, 00:12:48.197 "reset": true, 00:12:48.197 "nvme_admin": false, 00:12:48.197 "nvme_io": false, 00:12:48.197 "nvme_io_md": false, 00:12:48.197 "write_zeroes": true, 00:12:48.197 "zcopy": true, 00:12:48.197 "get_zone_info": false, 00:12:48.197 "zone_management": false, 00:12:48.197 "zone_append": false, 00:12:48.197 "compare": false, 00:12:48.197 "compare_and_write": false, 00:12:48.197 "abort": true, 00:12:48.197 "seek_hole": false, 00:12:48.197 "seek_data": false, 00:12:48.197 "copy": true, 00:12:48.197 "nvme_iov_md": false 00:12:48.197 }, 00:12:48.197 "memory_domains": [ 00:12:48.197 { 00:12:48.197 "dma_device_id": "system", 00:12:48.197 "dma_device_type": 1 00:12:48.197 }, 00:12:48.197 { 00:12:48.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.197 "dma_device_type": 2 00:12:48.197 } 00:12:48.197 ], 00:12:48.197 "driver_specific": {} 00:12:48.197 } 00:12:48.197 ] 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.197 
18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.197 "name": "Existed_Raid", 00:12:48.197 "uuid": "320f6e2a-1423-4758-b7c6-9dff5e5c2860", 00:12:48.197 "strip_size_kb": 0, 00:12:48.197 "state": "configuring", 00:12:48.197 "raid_level": "raid1", 00:12:48.197 "superblock": true, 00:12:48.197 "num_base_bdevs": 3, 00:12:48.197 "num_base_bdevs_discovered": 2, 00:12:48.197 "num_base_bdevs_operational": 3, 00:12:48.197 "base_bdevs_list": [ 00:12:48.197 { 00:12:48.197 "name": "BaseBdev1", 00:12:48.197 "uuid": "1b6bb48d-73d6-435a-8e64-aecdfc02ad0c", 00:12:48.197 "is_configured": true, 00:12:48.197 "data_offset": 2048, 00:12:48.197 "data_size": 63488 00:12:48.197 }, 00:12:48.197 { 00:12:48.197 "name": "BaseBdev2", 00:12:48.197 "uuid": "383d68be-8660-4e4d-9143-5adaa6239dfc", 00:12:48.197 "is_configured": true, 00:12:48.197 "data_offset": 2048, 00:12:48.197 "data_size": 63488 00:12:48.197 }, 00:12:48.197 { 00:12:48.197 "name": "BaseBdev3", 00:12:48.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.197 "is_configured": false, 00:12:48.197 "data_offset": 0, 00:12:48.197 "data_size": 0 00:12:48.197 } 00:12:48.197 ] 00:12:48.197 }' 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.197 18:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.787 [2024-12-06 18:11:14.090121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.787 BaseBdev3 00:12:48.787 [2024-12-06 18:11:14.090657] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007e80 00:12:48.787 [2024-12-06 18:11:14.090704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:48.787 [2024-12-06 18:11:14.091088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:48.787 [2024-12-06 18:11:14.091313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:48.787 [2024-12-06 18:11:14.091330] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:48.787 [2024-12-06 18:11:14.091505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.787 18:11:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.787 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.787 [ 00:12:48.787 { 00:12:48.787 "name": "BaseBdev3", 00:12:48.787 "aliases": [ 00:12:48.787 "c301dc59-2b18-4143-a876-bf595c6ba890" 00:12:48.787 ], 00:12:48.787 "product_name": "Malloc disk", 00:12:48.787 "block_size": 512, 00:12:48.787 "num_blocks": 65536, 00:12:48.787 "uuid": "c301dc59-2b18-4143-a876-bf595c6ba890", 00:12:48.787 "assigned_rate_limits": { 00:12:48.787 "rw_ios_per_sec": 0, 00:12:48.787 "rw_mbytes_per_sec": 0, 00:12:48.787 "r_mbytes_per_sec": 0, 00:12:48.787 "w_mbytes_per_sec": 0 00:12:48.787 }, 00:12:48.788 "claimed": true, 00:12:48.788 "claim_type": "exclusive_write", 00:12:48.788 "zoned": false, 00:12:48.788 "supported_io_types": { 00:12:48.788 "read": true, 00:12:48.788 "write": true, 00:12:48.788 "unmap": true, 00:12:48.788 "flush": true, 00:12:48.788 "reset": true, 00:12:48.788 "nvme_admin": false, 00:12:48.788 "nvme_io": false, 00:12:48.788 "nvme_io_md": false, 00:12:48.788 "write_zeroes": true, 00:12:48.788 "zcopy": true, 00:12:48.788 "get_zone_info": false, 00:12:48.788 "zone_management": false, 00:12:48.788 "zone_append": false, 00:12:48.788 "compare": false, 00:12:48.788 "compare_and_write": false, 00:12:48.788 "abort": true, 00:12:48.788 "seek_hole": false, 00:12:48.788 "seek_data": false, 00:12:48.788 "copy": true, 00:12:48.788 "nvme_iov_md": false 00:12:48.788 }, 00:12:48.788 "memory_domains": [ 00:12:48.788 { 00:12:48.788 "dma_device_id": "system", 00:12:48.788 "dma_device_type": 1 00:12:48.788 }, 00:12:48.788 { 00:12:48.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.788 "dma_device_type": 2 00:12:48.788 } 00:12:48.788 ], 00:12:48.788 "driver_specific": {} 00:12:48.788 } 00:12:48.788 ] 
00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.788 
18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.788 "name": "Existed_Raid", 00:12:48.788 "uuid": "320f6e2a-1423-4758-b7c6-9dff5e5c2860", 00:12:48.788 "strip_size_kb": 0, 00:12:48.788 "state": "online", 00:12:48.788 "raid_level": "raid1", 00:12:48.788 "superblock": true, 00:12:48.788 "num_base_bdevs": 3, 00:12:48.788 "num_base_bdevs_discovered": 3, 00:12:48.788 "num_base_bdevs_operational": 3, 00:12:48.788 "base_bdevs_list": [ 00:12:48.788 { 00:12:48.788 "name": "BaseBdev1", 00:12:48.788 "uuid": "1b6bb48d-73d6-435a-8e64-aecdfc02ad0c", 00:12:48.788 "is_configured": true, 00:12:48.788 "data_offset": 2048, 00:12:48.788 "data_size": 63488 00:12:48.788 }, 00:12:48.788 { 00:12:48.788 "name": "BaseBdev2", 00:12:48.788 "uuid": "383d68be-8660-4e4d-9143-5adaa6239dfc", 00:12:48.788 "is_configured": true, 00:12:48.788 "data_offset": 2048, 00:12:48.788 "data_size": 63488 00:12:48.788 }, 00:12:48.788 { 00:12:48.788 "name": "BaseBdev3", 00:12:48.788 "uuid": "c301dc59-2b18-4143-a876-bf595c6ba890", 00:12:48.788 "is_configured": true, 00:12:48.788 "data_offset": 2048, 00:12:48.788 "data_size": 63488 00:12:48.788 } 00:12:48.788 ] 00:12:48.788 }' 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.788 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.357 [2024-12-06 18:11:14.638793] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:49.357 "name": "Existed_Raid", 00:12:49.357 "aliases": [ 00:12:49.357 "320f6e2a-1423-4758-b7c6-9dff5e5c2860" 00:12:49.357 ], 00:12:49.357 "product_name": "Raid Volume", 00:12:49.357 "block_size": 512, 00:12:49.357 "num_blocks": 63488, 00:12:49.357 "uuid": "320f6e2a-1423-4758-b7c6-9dff5e5c2860", 00:12:49.357 "assigned_rate_limits": { 00:12:49.357 "rw_ios_per_sec": 0, 00:12:49.357 "rw_mbytes_per_sec": 0, 00:12:49.357 "r_mbytes_per_sec": 0, 00:12:49.357 "w_mbytes_per_sec": 0 00:12:49.357 }, 00:12:49.357 "claimed": false, 00:12:49.357 "zoned": false, 00:12:49.357 "supported_io_types": { 00:12:49.357 "read": true, 00:12:49.357 "write": true, 00:12:49.357 "unmap": false, 00:12:49.357 "flush": false, 00:12:49.357 "reset": true, 00:12:49.357 "nvme_admin": false, 00:12:49.357 "nvme_io": false, 00:12:49.357 "nvme_io_md": false, 00:12:49.357 "write_zeroes": true, 
00:12:49.357 "zcopy": false, 00:12:49.357 "get_zone_info": false, 00:12:49.357 "zone_management": false, 00:12:49.357 "zone_append": false, 00:12:49.357 "compare": false, 00:12:49.357 "compare_and_write": false, 00:12:49.357 "abort": false, 00:12:49.357 "seek_hole": false, 00:12:49.357 "seek_data": false, 00:12:49.357 "copy": false, 00:12:49.357 "nvme_iov_md": false 00:12:49.357 }, 00:12:49.357 "memory_domains": [ 00:12:49.357 { 00:12:49.357 "dma_device_id": "system", 00:12:49.357 "dma_device_type": 1 00:12:49.357 }, 00:12:49.357 { 00:12:49.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.357 "dma_device_type": 2 00:12:49.357 }, 00:12:49.357 { 00:12:49.357 "dma_device_id": "system", 00:12:49.357 "dma_device_type": 1 00:12:49.357 }, 00:12:49.357 { 00:12:49.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.357 "dma_device_type": 2 00:12:49.357 }, 00:12:49.357 { 00:12:49.357 "dma_device_id": "system", 00:12:49.357 "dma_device_type": 1 00:12:49.357 }, 00:12:49.357 { 00:12:49.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.357 "dma_device_type": 2 00:12:49.357 } 00:12:49.357 ], 00:12:49.357 "driver_specific": { 00:12:49.357 "raid": { 00:12:49.357 "uuid": "320f6e2a-1423-4758-b7c6-9dff5e5c2860", 00:12:49.357 "strip_size_kb": 0, 00:12:49.357 "state": "online", 00:12:49.357 "raid_level": "raid1", 00:12:49.357 "superblock": true, 00:12:49.357 "num_base_bdevs": 3, 00:12:49.357 "num_base_bdevs_discovered": 3, 00:12:49.357 "num_base_bdevs_operational": 3, 00:12:49.357 "base_bdevs_list": [ 00:12:49.357 { 00:12:49.357 "name": "BaseBdev1", 00:12:49.357 "uuid": "1b6bb48d-73d6-435a-8e64-aecdfc02ad0c", 00:12:49.357 "is_configured": true, 00:12:49.357 "data_offset": 2048, 00:12:49.357 "data_size": 63488 00:12:49.357 }, 00:12:49.357 { 00:12:49.357 "name": "BaseBdev2", 00:12:49.357 "uuid": "383d68be-8660-4e4d-9143-5adaa6239dfc", 00:12:49.357 "is_configured": true, 00:12:49.357 "data_offset": 2048, 00:12:49.357 "data_size": 63488 00:12:49.357 }, 00:12:49.357 { 
00:12:49.357 "name": "BaseBdev3", 00:12:49.357 "uuid": "c301dc59-2b18-4143-a876-bf595c6ba890", 00:12:49.357 "is_configured": true, 00:12:49.357 "data_offset": 2048, 00:12:49.357 "data_size": 63488 00:12:49.357 } 00:12:49.357 ] 00:12:49.357 } 00:12:49.357 } 00:12:49.357 }' 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:49.357 BaseBdev2 00:12:49.357 BaseBdev3' 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.357 18:11:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.357 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.617 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.617 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.617 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.617 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:49.617 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.617 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.617 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.617 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.617 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.617 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.617 18:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:49.617 18:11:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.617 18:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.617 [2024-12-06 18:11:14.938474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.617 
18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.617 "name": "Existed_Raid", 00:12:49.617 "uuid": "320f6e2a-1423-4758-b7c6-9dff5e5c2860", 00:12:49.617 "strip_size_kb": 0, 00:12:49.617 "state": "online", 00:12:49.617 "raid_level": "raid1", 00:12:49.617 "superblock": true, 00:12:49.617 "num_base_bdevs": 3, 00:12:49.617 "num_base_bdevs_discovered": 2, 00:12:49.617 "num_base_bdevs_operational": 2, 00:12:49.617 "base_bdevs_list": [ 00:12:49.617 { 00:12:49.617 "name": null, 00:12:49.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.617 "is_configured": false, 00:12:49.617 "data_offset": 0, 00:12:49.617 "data_size": 63488 00:12:49.617 }, 00:12:49.617 { 00:12:49.617 "name": "BaseBdev2", 00:12:49.617 "uuid": "383d68be-8660-4e4d-9143-5adaa6239dfc", 00:12:49.617 "is_configured": true, 00:12:49.617 "data_offset": 2048, 00:12:49.617 "data_size": 63488 00:12:49.617 }, 00:12:49.617 { 00:12:49.617 "name": "BaseBdev3", 00:12:49.617 "uuid": "c301dc59-2b18-4143-a876-bf595c6ba890", 00:12:49.617 "is_configured": true, 00:12:49.617 "data_offset": 2048, 00:12:49.617 "data_size": 63488 00:12:49.617 } 00:12:49.617 ] 00:12:49.617 }' 00:12:49.617 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.617 
18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.183 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:50.183 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:50.183 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.183 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:50.184 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.184 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.184 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.184 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:50.184 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:50.184 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:50.184 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.184 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.184 [2024-12-06 18:11:15.614892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:50.184 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.184 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:50.184 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.442 [2024-12-06 18:11:15.750390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:50.442 [2024-12-06 18:11:15.750680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.442 [2024-12-06 18:11:15.835203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.442 [2024-12-06 18:11:15.835273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.442 [2024-12-06 18:11:15.835293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.442 BaseBdev2 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.442 [ 00:12:50.442 { 00:12:50.442 "name": "BaseBdev2", 00:12:50.442 "aliases": [ 00:12:50.442 "ab9ea4f1-0fb7-4f21-95e5-580df54cc560" 00:12:50.442 ], 00:12:50.442 "product_name": "Malloc disk", 00:12:50.442 "block_size": 512, 00:12:50.442 "num_blocks": 65536, 00:12:50.442 "uuid": "ab9ea4f1-0fb7-4f21-95e5-580df54cc560", 00:12:50.442 "assigned_rate_limits": { 00:12:50.442 "rw_ios_per_sec": 0, 00:12:50.442 "rw_mbytes_per_sec": 0, 00:12:50.442 "r_mbytes_per_sec": 0, 00:12:50.442 "w_mbytes_per_sec": 0 00:12:50.442 }, 00:12:50.442 "claimed": false, 00:12:50.442 "zoned": false, 00:12:50.442 "supported_io_types": { 00:12:50.442 "read": true, 00:12:50.442 "write": true, 00:12:50.442 "unmap": true, 00:12:50.442 "flush": true, 00:12:50.442 "reset": true, 00:12:50.442 "nvme_admin": false, 00:12:50.442 "nvme_io": false, 00:12:50.442 
"nvme_io_md": false, 00:12:50.442 "write_zeroes": true, 00:12:50.442 "zcopy": true, 00:12:50.442 "get_zone_info": false, 00:12:50.442 "zone_management": false, 00:12:50.442 "zone_append": false, 00:12:50.442 "compare": false, 00:12:50.442 "compare_and_write": false, 00:12:50.442 "abort": true, 00:12:50.442 "seek_hole": false, 00:12:50.442 "seek_data": false, 00:12:50.442 "copy": true, 00:12:50.442 "nvme_iov_md": false 00:12:50.442 }, 00:12:50.442 "memory_domains": [ 00:12:50.442 { 00:12:50.442 "dma_device_id": "system", 00:12:50.442 "dma_device_type": 1 00:12:50.442 }, 00:12:50.442 { 00:12:50.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.442 "dma_device_type": 2 00:12:50.442 } 00:12:50.442 ], 00:12:50.442 "driver_specific": {} 00:12:50.442 } 00:12:50.442 ] 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.442 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:50.443 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:50.443 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:50.443 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:50.443 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.443 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.701 BaseBdev3 00:12:50.701 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.701 18:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:50.701 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:50.701 18:11:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.701 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:50.701 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.701 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.701 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.701 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.701 18:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.701 [ 00:12:50.701 { 00:12:50.701 "name": "BaseBdev3", 00:12:50.701 "aliases": [ 00:12:50.701 "27229303-86d8-484d-be0d-39f8bf2449de" 00:12:50.701 ], 00:12:50.701 "product_name": "Malloc disk", 00:12:50.701 "block_size": 512, 00:12:50.701 "num_blocks": 65536, 00:12:50.701 "uuid": "27229303-86d8-484d-be0d-39f8bf2449de", 00:12:50.701 "assigned_rate_limits": { 00:12:50.701 "rw_ios_per_sec": 0, 00:12:50.701 "rw_mbytes_per_sec": 0, 00:12:50.701 "r_mbytes_per_sec": 0, 00:12:50.701 "w_mbytes_per_sec": 0 00:12:50.701 }, 00:12:50.701 "claimed": false, 00:12:50.701 "zoned": false, 00:12:50.701 "supported_io_types": { 00:12:50.701 "read": true, 00:12:50.701 "write": true, 00:12:50.701 "unmap": true, 00:12:50.701 "flush": true, 00:12:50.701 "reset": true, 00:12:50.701 "nvme_admin": false, 
00:12:50.701 "nvme_io": false, 00:12:50.701 "nvme_io_md": false, 00:12:50.701 "write_zeroes": true, 00:12:50.701 "zcopy": true, 00:12:50.701 "get_zone_info": false, 00:12:50.701 "zone_management": false, 00:12:50.701 "zone_append": false, 00:12:50.701 "compare": false, 00:12:50.701 "compare_and_write": false, 00:12:50.701 "abort": true, 00:12:50.701 "seek_hole": false, 00:12:50.701 "seek_data": false, 00:12:50.701 "copy": true, 00:12:50.701 "nvme_iov_md": false 00:12:50.701 }, 00:12:50.701 "memory_domains": [ 00:12:50.701 { 00:12:50.701 "dma_device_id": "system", 00:12:50.701 "dma_device_type": 1 00:12:50.701 }, 00:12:50.701 { 00:12:50.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.701 "dma_device_type": 2 00:12:50.701 } 00:12:50.701 ], 00:12:50.701 "driver_specific": {} 00:12:50.701 } 00:12:50.701 ] 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.701 [2024-12-06 18:11:16.030331] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:50.701 [2024-12-06 18:11:16.030390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:50.701 [2024-12-06 18:11:16.030420] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.701 [2024-12-06 18:11:16.032837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.701 
18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.701 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.701 "name": "Existed_Raid", 00:12:50.701 "uuid": "1ff94bf9-fbf4-4cec-8a28-3822b9c0bd31", 00:12:50.701 "strip_size_kb": 0, 00:12:50.701 "state": "configuring", 00:12:50.701 "raid_level": "raid1", 00:12:50.701 "superblock": true, 00:12:50.701 "num_base_bdevs": 3, 00:12:50.701 "num_base_bdevs_discovered": 2, 00:12:50.701 "num_base_bdevs_operational": 3, 00:12:50.701 "base_bdevs_list": [ 00:12:50.701 { 00:12:50.701 "name": "BaseBdev1", 00:12:50.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.701 "is_configured": false, 00:12:50.701 "data_offset": 0, 00:12:50.701 "data_size": 0 00:12:50.701 }, 00:12:50.701 { 00:12:50.701 "name": "BaseBdev2", 00:12:50.701 "uuid": "ab9ea4f1-0fb7-4f21-95e5-580df54cc560", 00:12:50.701 "is_configured": true, 00:12:50.701 "data_offset": 2048, 00:12:50.701 "data_size": 63488 00:12:50.701 }, 00:12:50.701 { 00:12:50.701 "name": "BaseBdev3", 00:12:50.701 "uuid": "27229303-86d8-484d-be0d-39f8bf2449de", 00:12:50.701 "is_configured": true, 00:12:50.701 "data_offset": 2048, 00:12:50.701 "data_size": 63488 00:12:50.701 } 00:12:50.701 ] 00:12:50.702 }' 00:12:50.702 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.702 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.268 [2024-12-06 18:11:16.534479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:51.268 18:11:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.268 "name": 
"Existed_Raid", 00:12:51.268 "uuid": "1ff94bf9-fbf4-4cec-8a28-3822b9c0bd31", 00:12:51.268 "strip_size_kb": 0, 00:12:51.268 "state": "configuring", 00:12:51.268 "raid_level": "raid1", 00:12:51.268 "superblock": true, 00:12:51.268 "num_base_bdevs": 3, 00:12:51.268 "num_base_bdevs_discovered": 1, 00:12:51.268 "num_base_bdevs_operational": 3, 00:12:51.268 "base_bdevs_list": [ 00:12:51.268 { 00:12:51.268 "name": "BaseBdev1", 00:12:51.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.268 "is_configured": false, 00:12:51.268 "data_offset": 0, 00:12:51.268 "data_size": 0 00:12:51.268 }, 00:12:51.268 { 00:12:51.268 "name": null, 00:12:51.268 "uuid": "ab9ea4f1-0fb7-4f21-95e5-580df54cc560", 00:12:51.268 "is_configured": false, 00:12:51.268 "data_offset": 0, 00:12:51.268 "data_size": 63488 00:12:51.268 }, 00:12:51.268 { 00:12:51.268 "name": "BaseBdev3", 00:12:51.268 "uuid": "27229303-86d8-484d-be0d-39f8bf2449de", 00:12:51.268 "is_configured": true, 00:12:51.268 "data_offset": 2048, 00:12:51.268 "data_size": 63488 00:12:51.268 } 00:12:51.268 ] 00:12:51.268 }' 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.268 18:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.527 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.527 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.527 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.527 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:51.786 
18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.786 [2024-12-06 18:11:17.157176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.786 BaseBdev1 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.786 [ 00:12:51.786 { 00:12:51.786 "name": "BaseBdev1", 00:12:51.786 "aliases": [ 00:12:51.786 "c2d9d638-5c28-44cd-a815-3c415ac6b6bb" 00:12:51.786 ], 00:12:51.786 "product_name": "Malloc disk", 00:12:51.786 "block_size": 512, 00:12:51.786 "num_blocks": 65536, 00:12:51.786 "uuid": "c2d9d638-5c28-44cd-a815-3c415ac6b6bb", 00:12:51.786 "assigned_rate_limits": { 00:12:51.786 "rw_ios_per_sec": 0, 00:12:51.786 "rw_mbytes_per_sec": 0, 00:12:51.786 "r_mbytes_per_sec": 0, 00:12:51.786 "w_mbytes_per_sec": 0 00:12:51.786 }, 00:12:51.786 "claimed": true, 00:12:51.786 "claim_type": "exclusive_write", 00:12:51.786 "zoned": false, 00:12:51.786 "supported_io_types": { 00:12:51.786 "read": true, 00:12:51.786 "write": true, 00:12:51.786 "unmap": true, 00:12:51.786 "flush": true, 00:12:51.786 "reset": true, 00:12:51.786 "nvme_admin": false, 00:12:51.786 "nvme_io": false, 00:12:51.786 "nvme_io_md": false, 00:12:51.786 "write_zeroes": true, 00:12:51.786 "zcopy": true, 00:12:51.786 "get_zone_info": false, 00:12:51.786 "zone_management": false, 00:12:51.786 "zone_append": false, 00:12:51.786 "compare": false, 00:12:51.786 "compare_and_write": false, 00:12:51.786 "abort": true, 00:12:51.786 "seek_hole": false, 00:12:51.786 "seek_data": false, 00:12:51.786 "copy": true, 00:12:51.786 "nvme_iov_md": false 00:12:51.786 }, 00:12:51.786 "memory_domains": [ 00:12:51.786 { 00:12:51.786 "dma_device_id": "system", 00:12:51.786 "dma_device_type": 1 00:12:51.786 }, 00:12:51.786 { 00:12:51.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.786 "dma_device_type": 2 00:12:51.786 } 00:12:51.786 ], 00:12:51.786 "driver_specific": {} 00:12:51.786 } 00:12:51.786 ] 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:51.786 
18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.786 "name": "Existed_Raid", 00:12:51.786 "uuid": "1ff94bf9-fbf4-4cec-8a28-3822b9c0bd31", 00:12:51.786 "strip_size_kb": 0, 
00:12:51.786 "state": "configuring", 00:12:51.786 "raid_level": "raid1", 00:12:51.786 "superblock": true, 00:12:51.786 "num_base_bdevs": 3, 00:12:51.786 "num_base_bdevs_discovered": 2, 00:12:51.786 "num_base_bdevs_operational": 3, 00:12:51.786 "base_bdevs_list": [ 00:12:51.786 { 00:12:51.786 "name": "BaseBdev1", 00:12:51.786 "uuid": "c2d9d638-5c28-44cd-a815-3c415ac6b6bb", 00:12:51.786 "is_configured": true, 00:12:51.786 "data_offset": 2048, 00:12:51.786 "data_size": 63488 00:12:51.786 }, 00:12:51.786 { 00:12:51.786 "name": null, 00:12:51.786 "uuid": "ab9ea4f1-0fb7-4f21-95e5-580df54cc560", 00:12:51.786 "is_configured": false, 00:12:51.786 "data_offset": 0, 00:12:51.786 "data_size": 63488 00:12:51.786 }, 00:12:51.786 { 00:12:51.786 "name": "BaseBdev3", 00:12:51.786 "uuid": "27229303-86d8-484d-be0d-39f8bf2449de", 00:12:51.786 "is_configured": true, 00:12:51.786 "data_offset": 2048, 00:12:51.786 "data_size": 63488 00:12:51.786 } 00:12:51.786 ] 00:12:51.786 }' 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.786 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.352 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.352 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.352 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.352 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:52.352 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.352 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:52.352 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:12:52.352 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.352 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.352 [2024-12-06 18:11:17.753412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:52.352 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.352 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:52.352 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.352 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.353 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.353 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.353 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.353 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.353 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.353 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.353 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.353 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.353 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.353 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.353 18:11:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.353 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.353 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.353 "name": "Existed_Raid", 00:12:52.353 "uuid": "1ff94bf9-fbf4-4cec-8a28-3822b9c0bd31", 00:12:52.353 "strip_size_kb": 0, 00:12:52.353 "state": "configuring", 00:12:52.353 "raid_level": "raid1", 00:12:52.353 "superblock": true, 00:12:52.353 "num_base_bdevs": 3, 00:12:52.353 "num_base_bdevs_discovered": 1, 00:12:52.353 "num_base_bdevs_operational": 3, 00:12:52.353 "base_bdevs_list": [ 00:12:52.353 { 00:12:52.353 "name": "BaseBdev1", 00:12:52.353 "uuid": "c2d9d638-5c28-44cd-a815-3c415ac6b6bb", 00:12:52.353 "is_configured": true, 00:12:52.353 "data_offset": 2048, 00:12:52.353 "data_size": 63488 00:12:52.353 }, 00:12:52.353 { 00:12:52.353 "name": null, 00:12:52.353 "uuid": "ab9ea4f1-0fb7-4f21-95e5-580df54cc560", 00:12:52.353 "is_configured": false, 00:12:52.353 "data_offset": 0, 00:12:52.353 "data_size": 63488 00:12:52.353 }, 00:12:52.353 { 00:12:52.353 "name": null, 00:12:52.353 "uuid": "27229303-86d8-484d-be0d-39f8bf2449de", 00:12:52.353 "is_configured": false, 00:12:52.353 "data_offset": 0, 00:12:52.353 "data_size": 63488 00:12:52.353 } 00:12:52.353 ] 00:12:52.353 }' 00:12:52.353 18:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.353 18:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.919 [2024-12-06 18:11:18.325699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.919 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.920 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.920 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.920 "name": "Existed_Raid", 00:12:52.920 "uuid": "1ff94bf9-fbf4-4cec-8a28-3822b9c0bd31", 00:12:52.920 "strip_size_kb": 0, 00:12:52.920 "state": "configuring", 00:12:52.920 "raid_level": "raid1", 00:12:52.920 "superblock": true, 00:12:52.920 "num_base_bdevs": 3, 00:12:52.920 "num_base_bdevs_discovered": 2, 00:12:52.920 "num_base_bdevs_operational": 3, 00:12:52.920 "base_bdevs_list": [ 00:12:52.920 { 00:12:52.920 "name": "BaseBdev1", 00:12:52.920 "uuid": "c2d9d638-5c28-44cd-a815-3c415ac6b6bb", 00:12:52.920 "is_configured": true, 00:12:52.920 "data_offset": 2048, 00:12:52.920 "data_size": 63488 00:12:52.920 }, 00:12:52.920 { 00:12:52.920 "name": null, 00:12:52.920 "uuid": "ab9ea4f1-0fb7-4f21-95e5-580df54cc560", 00:12:52.920 "is_configured": false, 00:12:52.920 "data_offset": 0, 00:12:52.920 "data_size": 63488 00:12:52.920 }, 00:12:52.920 { 00:12:52.920 "name": "BaseBdev3", 00:12:52.920 "uuid": "27229303-86d8-484d-be0d-39f8bf2449de", 00:12:52.920 "is_configured": true, 00:12:52.920 "data_offset": 2048, 00:12:52.920 "data_size": 63488 00:12:52.920 } 00:12:52.920 ] 00:12:52.920 }' 00:12:52.920 18:11:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.920 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.485 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.486 [2024-12-06 18:11:18.885825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.486 18:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.743 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.744 "name": "Existed_Raid", 00:12:53.744 "uuid": "1ff94bf9-fbf4-4cec-8a28-3822b9c0bd31", 00:12:53.744 "strip_size_kb": 0, 00:12:53.744 "state": "configuring", 00:12:53.744 "raid_level": "raid1", 00:12:53.744 "superblock": true, 00:12:53.744 "num_base_bdevs": 3, 00:12:53.744 "num_base_bdevs_discovered": 1, 00:12:53.744 "num_base_bdevs_operational": 3, 00:12:53.744 "base_bdevs_list": [ 00:12:53.744 { 00:12:53.744 "name": null, 00:12:53.744 "uuid": "c2d9d638-5c28-44cd-a815-3c415ac6b6bb", 00:12:53.744 "is_configured": false, 00:12:53.744 "data_offset": 0, 00:12:53.744 "data_size": 63488 00:12:53.744 }, 00:12:53.744 { 00:12:53.744 "name": null, 00:12:53.744 "uuid": 
"ab9ea4f1-0fb7-4f21-95e5-580df54cc560", 00:12:53.744 "is_configured": false, 00:12:53.744 "data_offset": 0, 00:12:53.744 "data_size": 63488 00:12:53.744 }, 00:12:53.744 { 00:12:53.744 "name": "BaseBdev3", 00:12:53.744 "uuid": "27229303-86d8-484d-be0d-39f8bf2449de", 00:12:53.744 "is_configured": true, 00:12:53.744 "data_offset": 2048, 00:12:53.744 "data_size": 63488 00:12:53.744 } 00:12:53.744 ] 00:12:53.744 }' 00:12:53.744 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.744 18:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.002 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.002 18:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.002 18:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.002 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:54.002 18:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.260 [2024-12-06 18:11:19.536183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.260 18:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.261 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.261 18:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.261 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.261 "name": "Existed_Raid", 00:12:54.261 "uuid": "1ff94bf9-fbf4-4cec-8a28-3822b9c0bd31", 00:12:54.261 "strip_size_kb": 0, 00:12:54.261 "state": "configuring", 00:12:54.261 
"raid_level": "raid1", 00:12:54.261 "superblock": true, 00:12:54.261 "num_base_bdevs": 3, 00:12:54.261 "num_base_bdevs_discovered": 2, 00:12:54.261 "num_base_bdevs_operational": 3, 00:12:54.261 "base_bdevs_list": [ 00:12:54.261 { 00:12:54.261 "name": null, 00:12:54.261 "uuid": "c2d9d638-5c28-44cd-a815-3c415ac6b6bb", 00:12:54.261 "is_configured": false, 00:12:54.261 "data_offset": 0, 00:12:54.261 "data_size": 63488 00:12:54.261 }, 00:12:54.261 { 00:12:54.261 "name": "BaseBdev2", 00:12:54.261 "uuid": "ab9ea4f1-0fb7-4f21-95e5-580df54cc560", 00:12:54.261 "is_configured": true, 00:12:54.261 "data_offset": 2048, 00:12:54.261 "data_size": 63488 00:12:54.261 }, 00:12:54.261 { 00:12:54.261 "name": "BaseBdev3", 00:12:54.261 "uuid": "27229303-86d8-484d-be0d-39f8bf2449de", 00:12:54.261 "is_configured": true, 00:12:54.261 "data_offset": 2048, 00:12:54.261 "data_size": 63488 00:12:54.261 } 00:12:54.261 ] 00:12:54.261 }' 00:12:54.261 18:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.261 18:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:54.827 18:11:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c2d9d638-5c28-44cd-a815-3c415ac6b6bb 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.827 [2024-12-06 18:11:20.185997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:54.827 [2024-12-06 18:11:20.186489] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:54.827 [2024-12-06 18:11:20.186514] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:54.827 [2024-12-06 18:11:20.186856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:54.827 NewBaseBdev 00:12:54.827 [2024-12-06 18:11:20.187033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:54.827 [2024-12-06 18:11:20.187054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:54.827 [2024-12-06 18:11:20.187213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:54.827 
18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:54.827 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.828 [ 00:12:54.828 { 00:12:54.828 "name": "NewBaseBdev", 00:12:54.828 "aliases": [ 00:12:54.828 "c2d9d638-5c28-44cd-a815-3c415ac6b6bb" 00:12:54.828 ], 00:12:54.828 "product_name": "Malloc disk", 00:12:54.828 "block_size": 512, 00:12:54.828 "num_blocks": 65536, 00:12:54.828 "uuid": "c2d9d638-5c28-44cd-a815-3c415ac6b6bb", 00:12:54.828 "assigned_rate_limits": { 00:12:54.828 "rw_ios_per_sec": 0, 00:12:54.828 "rw_mbytes_per_sec": 0, 00:12:54.828 "r_mbytes_per_sec": 0, 00:12:54.828 "w_mbytes_per_sec": 0 00:12:54.828 }, 00:12:54.828 "claimed": true, 00:12:54.828 "claim_type": "exclusive_write", 00:12:54.828 
"zoned": false, 00:12:54.828 "supported_io_types": { 00:12:54.828 "read": true, 00:12:54.828 "write": true, 00:12:54.828 "unmap": true, 00:12:54.828 "flush": true, 00:12:54.828 "reset": true, 00:12:54.828 "nvme_admin": false, 00:12:54.828 "nvme_io": false, 00:12:54.828 "nvme_io_md": false, 00:12:54.828 "write_zeroes": true, 00:12:54.828 "zcopy": true, 00:12:54.828 "get_zone_info": false, 00:12:54.828 "zone_management": false, 00:12:54.828 "zone_append": false, 00:12:54.828 "compare": false, 00:12:54.828 "compare_and_write": false, 00:12:54.828 "abort": true, 00:12:54.828 "seek_hole": false, 00:12:54.828 "seek_data": false, 00:12:54.828 "copy": true, 00:12:54.828 "nvme_iov_md": false 00:12:54.828 }, 00:12:54.828 "memory_domains": [ 00:12:54.828 { 00:12:54.828 "dma_device_id": "system", 00:12:54.828 "dma_device_type": 1 00:12:54.828 }, 00:12:54.828 { 00:12:54.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.828 "dma_device_type": 2 00:12:54.828 } 00:12:54.828 ], 00:12:54.828 "driver_specific": {} 00:12:54.828 } 00:12:54.828 ] 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.828 "name": "Existed_Raid", 00:12:54.828 "uuid": "1ff94bf9-fbf4-4cec-8a28-3822b9c0bd31", 00:12:54.828 "strip_size_kb": 0, 00:12:54.828 "state": "online", 00:12:54.828 "raid_level": "raid1", 00:12:54.828 "superblock": true, 00:12:54.828 "num_base_bdevs": 3, 00:12:54.828 "num_base_bdevs_discovered": 3, 00:12:54.828 "num_base_bdevs_operational": 3, 00:12:54.828 "base_bdevs_list": [ 00:12:54.828 { 00:12:54.828 "name": "NewBaseBdev", 00:12:54.828 "uuid": "c2d9d638-5c28-44cd-a815-3c415ac6b6bb", 00:12:54.828 "is_configured": true, 00:12:54.828 "data_offset": 2048, 00:12:54.828 "data_size": 63488 00:12:54.828 }, 00:12:54.828 { 00:12:54.828 "name": "BaseBdev2", 00:12:54.828 "uuid": "ab9ea4f1-0fb7-4f21-95e5-580df54cc560", 00:12:54.828 "is_configured": true, 00:12:54.828 "data_offset": 2048, 00:12:54.828 "data_size": 63488 00:12:54.828 }, 00:12:54.828 
{ 00:12:54.828 "name": "BaseBdev3", 00:12:54.828 "uuid": "27229303-86d8-484d-be0d-39f8bf2449de", 00:12:54.828 "is_configured": true, 00:12:54.828 "data_offset": 2048, 00:12:54.828 "data_size": 63488 00:12:54.828 } 00:12:54.828 ] 00:12:54.828 }' 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.828 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.396 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:55.396 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:55.396 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:55.396 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:55.396 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:55.396 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:55.396 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:55.396 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:55.396 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.396 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.396 [2024-12-06 18:11:20.718553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.396 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.396 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:55.396 "name": "Existed_Raid", 00:12:55.396 
"aliases": [ 00:12:55.396 "1ff94bf9-fbf4-4cec-8a28-3822b9c0bd31" 00:12:55.396 ], 00:12:55.396 "product_name": "Raid Volume", 00:12:55.396 "block_size": 512, 00:12:55.396 "num_blocks": 63488, 00:12:55.396 "uuid": "1ff94bf9-fbf4-4cec-8a28-3822b9c0bd31", 00:12:55.396 "assigned_rate_limits": { 00:12:55.396 "rw_ios_per_sec": 0, 00:12:55.396 "rw_mbytes_per_sec": 0, 00:12:55.396 "r_mbytes_per_sec": 0, 00:12:55.397 "w_mbytes_per_sec": 0 00:12:55.397 }, 00:12:55.397 "claimed": false, 00:12:55.397 "zoned": false, 00:12:55.397 "supported_io_types": { 00:12:55.397 "read": true, 00:12:55.397 "write": true, 00:12:55.397 "unmap": false, 00:12:55.397 "flush": false, 00:12:55.397 "reset": true, 00:12:55.397 "nvme_admin": false, 00:12:55.397 "nvme_io": false, 00:12:55.397 "nvme_io_md": false, 00:12:55.397 "write_zeroes": true, 00:12:55.397 "zcopy": false, 00:12:55.397 "get_zone_info": false, 00:12:55.397 "zone_management": false, 00:12:55.397 "zone_append": false, 00:12:55.397 "compare": false, 00:12:55.397 "compare_and_write": false, 00:12:55.397 "abort": false, 00:12:55.397 "seek_hole": false, 00:12:55.397 "seek_data": false, 00:12:55.397 "copy": false, 00:12:55.397 "nvme_iov_md": false 00:12:55.397 }, 00:12:55.397 "memory_domains": [ 00:12:55.397 { 00:12:55.397 "dma_device_id": "system", 00:12:55.397 "dma_device_type": 1 00:12:55.397 }, 00:12:55.397 { 00:12:55.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.397 "dma_device_type": 2 00:12:55.397 }, 00:12:55.397 { 00:12:55.397 "dma_device_id": "system", 00:12:55.397 "dma_device_type": 1 00:12:55.397 }, 00:12:55.397 { 00:12:55.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.397 "dma_device_type": 2 00:12:55.397 }, 00:12:55.397 { 00:12:55.397 "dma_device_id": "system", 00:12:55.397 "dma_device_type": 1 00:12:55.397 }, 00:12:55.397 { 00:12:55.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.397 "dma_device_type": 2 00:12:55.397 } 00:12:55.397 ], 00:12:55.397 "driver_specific": { 00:12:55.397 "raid": { 00:12:55.397 
"uuid": "1ff94bf9-fbf4-4cec-8a28-3822b9c0bd31", 00:12:55.397 "strip_size_kb": 0, 00:12:55.397 "state": "online", 00:12:55.397 "raid_level": "raid1", 00:12:55.397 "superblock": true, 00:12:55.397 "num_base_bdevs": 3, 00:12:55.397 "num_base_bdevs_discovered": 3, 00:12:55.397 "num_base_bdevs_operational": 3, 00:12:55.397 "base_bdevs_list": [ 00:12:55.397 { 00:12:55.397 "name": "NewBaseBdev", 00:12:55.397 "uuid": "c2d9d638-5c28-44cd-a815-3c415ac6b6bb", 00:12:55.397 "is_configured": true, 00:12:55.397 "data_offset": 2048, 00:12:55.397 "data_size": 63488 00:12:55.397 }, 00:12:55.397 { 00:12:55.397 "name": "BaseBdev2", 00:12:55.397 "uuid": "ab9ea4f1-0fb7-4f21-95e5-580df54cc560", 00:12:55.397 "is_configured": true, 00:12:55.397 "data_offset": 2048, 00:12:55.397 "data_size": 63488 00:12:55.397 }, 00:12:55.397 { 00:12:55.397 "name": "BaseBdev3", 00:12:55.397 "uuid": "27229303-86d8-484d-be0d-39f8bf2449de", 00:12:55.397 "is_configured": true, 00:12:55.397 "data_offset": 2048, 00:12:55.397 "data_size": 63488 00:12:55.397 } 00:12:55.397 ] 00:12:55.397 } 00:12:55.397 } 00:12:55.397 }' 00:12:55.397 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:55.397 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:55.397 BaseBdev2 00:12:55.397 BaseBdev3' 00:12:55.397 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.397 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:55.397 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.397 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:55.397 18:11:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.397 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.397 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.397 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.656 18:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.656 [2024-12-06 18:11:21.030251] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:55.656 [2024-12-06 18:11:21.030411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.656 [2024-12-06 18:11:21.030594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.656 [2024-12-06 18:11:21.031126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.656 [2024-12-06 18:11:21.031154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68140 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68140 ']' 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68140 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68140 00:12:55.656 killing process with pid 68140 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68140' 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68140 00:12:55.656 [2024-12-06 18:11:21.077335] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.656 18:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68140 00:12:55.915 [2024-12-06 18:11:21.340589] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:57.288 18:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:57.288 00:12:57.288 real 0m11.647s 00:12:57.288 user 0m19.387s 00:12:57.288 sys 0m1.538s 00:12:57.288 18:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.288 18:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.288 ************************************ 00:12:57.288 END TEST raid_state_function_test_sb 00:12:57.288 ************************************ 00:12:57.288 18:11:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:12:57.288 18:11:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:57.288 18:11:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.288 18:11:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:57.288 ************************************ 00:12:57.288 START TEST raid_superblock_test 00:12:57.288 ************************************ 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:57.288 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68772 00:12:57.289 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:57.289 18:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68772 00:12:57.289 18:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68772 ']' 00:12:57.289 18:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.289 18:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.289 18:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.289 18:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.289 18:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.289 [2024-12-06 18:11:22.549996] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:12:57.289 [2024-12-06 18:11:22.550197] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68772 ] 00:12:57.289 [2024-12-06 18:11:22.745907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.547 [2024-12-06 18:11:22.898203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.811 [2024-12-06 18:11:23.110406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.811 [2024-12-06 18:11:23.110454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.079 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.079 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:58.079 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:58.079 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:58.080 
18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.080 malloc1 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.080 [2024-12-06 18:11:23.585693] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:58.080 [2024-12-06 18:11:23.585920] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.080 [2024-12-06 18:11:23.586078] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:58.080 [2024-12-06 18:11:23.586195] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.080 [2024-12-06 18:11:23.589601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.080 [2024-12-06 18:11:23.589797] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:58.080 pt1 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.080 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.339 malloc2 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.339 [2024-12-06 18:11:23.638567] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:58.339 [2024-12-06 18:11:23.638633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.339 [2024-12-06 18:11:23.638699] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:58.339 [2024-12-06 18:11:23.638715] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.339 [2024-12-06 18:11:23.641511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.339 [2024-12-06 18:11:23.641557] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:58.339 
pt2 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.339 malloc3 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.339 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.339 [2024-12-06 18:11:23.706326] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:58.339 [2024-12-06 18:11:23.706513] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.340 [2024-12-06 18:11:23.706557] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:58.340 [2024-12-06 18:11:23.706575] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.340 [2024-12-06 18:11:23.709314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.340 pt3 00:12:58.340 [2024-12-06 18:11:23.709466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.340 [2024-12-06 18:11:23.714382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:58.340 [2024-12-06 18:11:23.716813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:58.340 [2024-12-06 18:11:23.716921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:58.340 [2024-12-06 18:11:23.717163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:58.340 [2024-12-06 18:11:23.717191] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:58.340 [2024-12-06 18:11:23.717489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:58.340 
[2024-12-06 18:11:23.717715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:58.340 [2024-12-06 18:11:23.717735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:58.340 [2024-12-06 18:11:23.717931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.340 "name": "raid_bdev1", 00:12:58.340 "uuid": "86fe3281-9245-421f-8d6f-48d564aafc10", 00:12:58.340 "strip_size_kb": 0, 00:12:58.340 "state": "online", 00:12:58.340 "raid_level": "raid1", 00:12:58.340 "superblock": true, 00:12:58.340 "num_base_bdevs": 3, 00:12:58.340 "num_base_bdevs_discovered": 3, 00:12:58.340 "num_base_bdevs_operational": 3, 00:12:58.340 "base_bdevs_list": [ 00:12:58.340 { 00:12:58.340 "name": "pt1", 00:12:58.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.340 "is_configured": true, 00:12:58.340 "data_offset": 2048, 00:12:58.340 "data_size": 63488 00:12:58.340 }, 00:12:58.340 { 00:12:58.340 "name": "pt2", 00:12:58.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.340 "is_configured": true, 00:12:58.340 "data_offset": 2048, 00:12:58.340 "data_size": 63488 00:12:58.340 }, 00:12:58.340 { 00:12:58.340 "name": "pt3", 00:12:58.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.340 "is_configured": true, 00:12:58.340 "data_offset": 2048, 00:12:58.340 "data_size": 63488 00:12:58.340 } 00:12:58.340 ] 00:12:58.340 }' 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.340 18:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.908 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:58.908 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:58.908 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:58.908 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:58.908 18:11:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:58.908 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:58.908 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.908 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:58.908 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.908 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.908 [2024-12-06 18:11:24.218904] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:58.909 "name": "raid_bdev1", 00:12:58.909 "aliases": [ 00:12:58.909 "86fe3281-9245-421f-8d6f-48d564aafc10" 00:12:58.909 ], 00:12:58.909 "product_name": "Raid Volume", 00:12:58.909 "block_size": 512, 00:12:58.909 "num_blocks": 63488, 00:12:58.909 "uuid": "86fe3281-9245-421f-8d6f-48d564aafc10", 00:12:58.909 "assigned_rate_limits": { 00:12:58.909 "rw_ios_per_sec": 0, 00:12:58.909 "rw_mbytes_per_sec": 0, 00:12:58.909 "r_mbytes_per_sec": 0, 00:12:58.909 "w_mbytes_per_sec": 0 00:12:58.909 }, 00:12:58.909 "claimed": false, 00:12:58.909 "zoned": false, 00:12:58.909 "supported_io_types": { 00:12:58.909 "read": true, 00:12:58.909 "write": true, 00:12:58.909 "unmap": false, 00:12:58.909 "flush": false, 00:12:58.909 "reset": true, 00:12:58.909 "nvme_admin": false, 00:12:58.909 "nvme_io": false, 00:12:58.909 "nvme_io_md": false, 00:12:58.909 "write_zeroes": true, 00:12:58.909 "zcopy": false, 00:12:58.909 "get_zone_info": false, 00:12:58.909 "zone_management": false, 00:12:58.909 "zone_append": false, 00:12:58.909 "compare": false, 00:12:58.909 
"compare_and_write": false, 00:12:58.909 "abort": false, 00:12:58.909 "seek_hole": false, 00:12:58.909 "seek_data": false, 00:12:58.909 "copy": false, 00:12:58.909 "nvme_iov_md": false 00:12:58.909 }, 00:12:58.909 "memory_domains": [ 00:12:58.909 { 00:12:58.909 "dma_device_id": "system", 00:12:58.909 "dma_device_type": 1 00:12:58.909 }, 00:12:58.909 { 00:12:58.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.909 "dma_device_type": 2 00:12:58.909 }, 00:12:58.909 { 00:12:58.909 "dma_device_id": "system", 00:12:58.909 "dma_device_type": 1 00:12:58.909 }, 00:12:58.909 { 00:12:58.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.909 "dma_device_type": 2 00:12:58.909 }, 00:12:58.909 { 00:12:58.909 "dma_device_id": "system", 00:12:58.909 "dma_device_type": 1 00:12:58.909 }, 00:12:58.909 { 00:12:58.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.909 "dma_device_type": 2 00:12:58.909 } 00:12:58.909 ], 00:12:58.909 "driver_specific": { 00:12:58.909 "raid": { 00:12:58.909 "uuid": "86fe3281-9245-421f-8d6f-48d564aafc10", 00:12:58.909 "strip_size_kb": 0, 00:12:58.909 "state": "online", 00:12:58.909 "raid_level": "raid1", 00:12:58.909 "superblock": true, 00:12:58.909 "num_base_bdevs": 3, 00:12:58.909 "num_base_bdevs_discovered": 3, 00:12:58.909 "num_base_bdevs_operational": 3, 00:12:58.909 "base_bdevs_list": [ 00:12:58.909 { 00:12:58.909 "name": "pt1", 00:12:58.909 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.909 "is_configured": true, 00:12:58.909 "data_offset": 2048, 00:12:58.909 "data_size": 63488 00:12:58.909 }, 00:12:58.909 { 00:12:58.909 "name": "pt2", 00:12:58.909 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.909 "is_configured": true, 00:12:58.909 "data_offset": 2048, 00:12:58.909 "data_size": 63488 00:12:58.909 }, 00:12:58.909 { 00:12:58.909 "name": "pt3", 00:12:58.909 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.909 "is_configured": true, 00:12:58.909 "data_offset": 2048, 00:12:58.909 "data_size": 63488 00:12:58.909 } 
00:12:58.909 ] 00:12:58.909 } 00:12:58.909 } 00:12:58.909 }' 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:58.909 pt2 00:12:58.909 pt3' 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.909 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:59.168 18:11:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.168 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.168 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.168 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.168 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.168 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.168 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.169 [2024-12-06 18:11:24.534928] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=86fe3281-9245-421f-8d6f-48d564aafc10 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 86fe3281-9245-421f-8d6f-48d564aafc10 ']' 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.169 [2024-12-06 18:11:24.582611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:59.169 [2024-12-06 18:11:24.582648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.169 [2024-12-06 18:11:24.582783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.169 [2024-12-06 18:11:24.582886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.169 [2024-12-06 18:11:24.582903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:59.169 18:11:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.169 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.429 [2024-12-06 18:11:24.722679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:59.429 [2024-12-06 18:11:24.725075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:59.429 [2024-12-06 18:11:24.725156] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:59.429 [2024-12-06 18:11:24.725227] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:59.429 [2024-12-06 18:11:24.725301] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:59.429 [2024-12-06 18:11:24.725334] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:59.429 [2024-12-06 18:11:24.725361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:59.429 [2024-12-06 18:11:24.725374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:59.429 request: 00:12:59.429 { 00:12:59.429 "name": "raid_bdev1", 00:12:59.429 "raid_level": "raid1", 00:12:59.429 "base_bdevs": [ 00:12:59.429 "malloc1", 00:12:59.429 "malloc2", 00:12:59.429 "malloc3" 00:12:59.429 ], 00:12:59.429 "superblock": false, 00:12:59.429 "method": "bdev_raid_create", 00:12:59.429 "req_id": 1 00:12:59.429 } 00:12:59.429 Got JSON-RPC error response 00:12:59.429 response: 00:12:59.429 { 00:12:59.429 "code": -17, 00:12:59.429 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:59.429 } 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.429 [2024-12-06 18:11:24.790662] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:59.429 [2024-12-06 18:11:24.790896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.429 [2024-12-06 18:11:24.790974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:59.429 [2024-12-06 18:11:24.791194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.429 [2024-12-06 18:11:24.794066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.429 [2024-12-06 18:11:24.794158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:59.429 [2024-12-06 18:11:24.794387] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:59.429 [2024-12-06 18:11:24.794567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:59.429 pt1 00:12:59.429 
18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.429 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.429 "name": "raid_bdev1", 00:12:59.429 "uuid": "86fe3281-9245-421f-8d6f-48d564aafc10", 00:12:59.429 "strip_size_kb": 0, 00:12:59.429 
"state": "configuring", 00:12:59.429 "raid_level": "raid1", 00:12:59.429 "superblock": true, 00:12:59.429 "num_base_bdevs": 3, 00:12:59.430 "num_base_bdevs_discovered": 1, 00:12:59.430 "num_base_bdevs_operational": 3, 00:12:59.430 "base_bdevs_list": [ 00:12:59.430 { 00:12:59.430 "name": "pt1", 00:12:59.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.430 "is_configured": true, 00:12:59.430 "data_offset": 2048, 00:12:59.430 "data_size": 63488 00:12:59.430 }, 00:12:59.430 { 00:12:59.430 "name": null, 00:12:59.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.430 "is_configured": false, 00:12:59.430 "data_offset": 2048, 00:12:59.430 "data_size": 63488 00:12:59.430 }, 00:12:59.430 { 00:12:59.430 "name": null, 00:12:59.430 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.430 "is_configured": false, 00:12:59.430 "data_offset": 2048, 00:12:59.430 "data_size": 63488 00:12:59.430 } 00:12:59.430 ] 00:12:59.430 }' 00:12:59.430 18:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.430 18:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.009 [2024-12-06 18:11:25.295033] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:00.009 [2024-12-06 18:11:25.295282] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.009 [2024-12-06 18:11:25.295327] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:00.009 
[2024-12-06 18:11:25.295343] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.009 [2024-12-06 18:11:25.295952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.009 [2024-12-06 18:11:25.295983] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:00.009 [2024-12-06 18:11:25.296101] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:00.009 [2024-12-06 18:11:25.296141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:00.009 pt2 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.009 [2024-12-06 18:11:25.303002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.009 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.010 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.010 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.010 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.010 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.010 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.010 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.010 "name": "raid_bdev1", 00:13:00.010 "uuid": "86fe3281-9245-421f-8d6f-48d564aafc10", 00:13:00.010 "strip_size_kb": 0, 00:13:00.010 "state": "configuring", 00:13:00.010 "raid_level": "raid1", 00:13:00.010 "superblock": true, 00:13:00.010 "num_base_bdevs": 3, 00:13:00.010 "num_base_bdevs_discovered": 1, 00:13:00.010 "num_base_bdevs_operational": 3, 00:13:00.010 "base_bdevs_list": [ 00:13:00.010 { 00:13:00.010 "name": "pt1", 00:13:00.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:00.010 "is_configured": true, 00:13:00.010 "data_offset": 2048, 00:13:00.010 "data_size": 63488 00:13:00.010 }, 00:13:00.010 { 00:13:00.010 "name": null, 00:13:00.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.010 "is_configured": false, 00:13:00.010 "data_offset": 0, 00:13:00.010 "data_size": 63488 00:13:00.010 }, 00:13:00.010 { 00:13:00.010 "name": null, 00:13:00.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.010 "is_configured": false, 00:13:00.010 
"data_offset": 2048, 00:13:00.010 "data_size": 63488 00:13:00.010 } 00:13:00.010 ] 00:13:00.010 }' 00:13:00.010 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.010 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.577 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.578 [2024-12-06 18:11:25.843160] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:00.578 [2024-12-06 18:11:25.843386] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.578 [2024-12-06 18:11:25.843460] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:00.578 [2024-12-06 18:11:25.843590] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.578 [2024-12-06 18:11:25.844201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.578 [2024-12-06 18:11:25.844232] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:00.578 [2024-12-06 18:11:25.844334] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:00.578 [2024-12-06 18:11:25.844382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:00.578 pt2 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.578 18:11:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.578 [2024-12-06 18:11:25.851118] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:00.578 [2024-12-06 18:11:25.851300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.578 [2024-12-06 18:11:25.851365] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:00.578 [2024-12-06 18:11:25.851490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.578 [2024-12-06 18:11:25.852009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.578 [2024-12-06 18:11:25.852173] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:00.578 [2024-12-06 18:11:25.852365] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:00.578 [2024-12-06 18:11:25.852545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:00.578 [2024-12-06 18:11:25.852749] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:00.578 [2024-12-06 18:11:25.852890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:00.578 [2024-12-06 18:11:25.853316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:00.578 [2024-12-06 18:11:25.853639] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:13:00.578 [2024-12-06 18:11:25.853662] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:00.578 [2024-12-06 18:11:25.853858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.578 pt3 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.578 18:11:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.578 "name": "raid_bdev1", 00:13:00.578 "uuid": "86fe3281-9245-421f-8d6f-48d564aafc10", 00:13:00.578 "strip_size_kb": 0, 00:13:00.578 "state": "online", 00:13:00.578 "raid_level": "raid1", 00:13:00.578 "superblock": true, 00:13:00.578 "num_base_bdevs": 3, 00:13:00.578 "num_base_bdevs_discovered": 3, 00:13:00.578 "num_base_bdevs_operational": 3, 00:13:00.578 "base_bdevs_list": [ 00:13:00.578 { 00:13:00.578 "name": "pt1", 00:13:00.578 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:00.578 "is_configured": true, 00:13:00.578 "data_offset": 2048, 00:13:00.578 "data_size": 63488 00:13:00.578 }, 00:13:00.578 { 00:13:00.578 "name": "pt2", 00:13:00.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.578 "is_configured": true, 00:13:00.578 "data_offset": 2048, 00:13:00.578 "data_size": 63488 00:13:00.578 }, 00:13:00.578 { 00:13:00.578 "name": "pt3", 00:13:00.578 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.578 "is_configured": true, 00:13:00.578 "data_offset": 2048, 00:13:00.578 "data_size": 63488 00:13:00.578 } 00:13:00.578 ] 00:13:00.578 }' 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.578 18:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.835 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:00.835 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:00.835 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:13:00.835 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:00.835 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:00.835 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:00.835 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:00.835 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:00.835 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.835 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.835 [2024-12-06 18:11:26.351673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.093 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.093 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:01.093 "name": "raid_bdev1", 00:13:01.093 "aliases": [ 00:13:01.093 "86fe3281-9245-421f-8d6f-48d564aafc10" 00:13:01.093 ], 00:13:01.093 "product_name": "Raid Volume", 00:13:01.093 "block_size": 512, 00:13:01.093 "num_blocks": 63488, 00:13:01.093 "uuid": "86fe3281-9245-421f-8d6f-48d564aafc10", 00:13:01.093 "assigned_rate_limits": { 00:13:01.093 "rw_ios_per_sec": 0, 00:13:01.093 "rw_mbytes_per_sec": 0, 00:13:01.093 "r_mbytes_per_sec": 0, 00:13:01.093 "w_mbytes_per_sec": 0 00:13:01.093 }, 00:13:01.093 "claimed": false, 00:13:01.093 "zoned": false, 00:13:01.093 "supported_io_types": { 00:13:01.093 "read": true, 00:13:01.093 "write": true, 00:13:01.093 "unmap": false, 00:13:01.093 "flush": false, 00:13:01.093 "reset": true, 00:13:01.093 "nvme_admin": false, 00:13:01.093 "nvme_io": false, 00:13:01.093 "nvme_io_md": false, 00:13:01.093 "write_zeroes": true, 00:13:01.093 "zcopy": false, 00:13:01.093 "get_zone_info": 
false, 00:13:01.093 "zone_management": false, 00:13:01.093 "zone_append": false, 00:13:01.094 "compare": false, 00:13:01.094 "compare_and_write": false, 00:13:01.094 "abort": false, 00:13:01.094 "seek_hole": false, 00:13:01.094 "seek_data": false, 00:13:01.094 "copy": false, 00:13:01.094 "nvme_iov_md": false 00:13:01.094 }, 00:13:01.094 "memory_domains": [ 00:13:01.094 { 00:13:01.094 "dma_device_id": "system", 00:13:01.094 "dma_device_type": 1 00:13:01.094 }, 00:13:01.094 { 00:13:01.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.094 "dma_device_type": 2 00:13:01.094 }, 00:13:01.094 { 00:13:01.094 "dma_device_id": "system", 00:13:01.094 "dma_device_type": 1 00:13:01.094 }, 00:13:01.094 { 00:13:01.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.094 "dma_device_type": 2 00:13:01.094 }, 00:13:01.094 { 00:13:01.094 "dma_device_id": "system", 00:13:01.094 "dma_device_type": 1 00:13:01.094 }, 00:13:01.094 { 00:13:01.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.094 "dma_device_type": 2 00:13:01.094 } 00:13:01.094 ], 00:13:01.094 "driver_specific": { 00:13:01.094 "raid": { 00:13:01.094 "uuid": "86fe3281-9245-421f-8d6f-48d564aafc10", 00:13:01.094 "strip_size_kb": 0, 00:13:01.094 "state": "online", 00:13:01.094 "raid_level": "raid1", 00:13:01.094 "superblock": true, 00:13:01.094 "num_base_bdevs": 3, 00:13:01.094 "num_base_bdevs_discovered": 3, 00:13:01.094 "num_base_bdevs_operational": 3, 00:13:01.094 "base_bdevs_list": [ 00:13:01.094 { 00:13:01.094 "name": "pt1", 00:13:01.094 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:01.094 "is_configured": true, 00:13:01.094 "data_offset": 2048, 00:13:01.094 "data_size": 63488 00:13:01.094 }, 00:13:01.094 { 00:13:01.094 "name": "pt2", 00:13:01.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.094 "is_configured": true, 00:13:01.094 "data_offset": 2048, 00:13:01.094 "data_size": 63488 00:13:01.094 }, 00:13:01.094 { 00:13:01.094 "name": "pt3", 00:13:01.094 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:13:01.094 "is_configured": true, 00:13:01.094 "data_offset": 2048, 00:13:01.094 "data_size": 63488 00:13:01.094 } 00:13:01.094 ] 00:13:01.094 } 00:13:01.094 } 00:13:01.094 }' 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:01.094 pt2 00:13:01.094 pt3' 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.094 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.352 [2024-12-06 18:11:26.663704] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 86fe3281-9245-421f-8d6f-48d564aafc10 '!=' 86fe3281-9245-421f-8d6f-48d564aafc10 ']' 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.352 [2024-12-06 18:11:26.719424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.352 18:11:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.352 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.353 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.353 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.353 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.353 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.353 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.353 "name": "raid_bdev1", 00:13:01.353 "uuid": "86fe3281-9245-421f-8d6f-48d564aafc10", 00:13:01.353 "strip_size_kb": 0, 00:13:01.353 "state": "online", 00:13:01.353 "raid_level": "raid1", 00:13:01.353 "superblock": true, 00:13:01.353 "num_base_bdevs": 3, 00:13:01.353 "num_base_bdevs_discovered": 2, 00:13:01.353 "num_base_bdevs_operational": 2, 00:13:01.353 "base_bdevs_list": [ 00:13:01.353 { 00:13:01.353 "name": null, 00:13:01.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.353 "is_configured": false, 00:13:01.353 "data_offset": 0, 00:13:01.353 "data_size": 63488 00:13:01.353 }, 00:13:01.353 { 00:13:01.353 "name": "pt2", 00:13:01.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.353 "is_configured": true, 00:13:01.353 "data_offset": 2048, 00:13:01.353 "data_size": 63488 00:13:01.353 }, 00:13:01.353 { 00:13:01.353 "name": "pt3", 00:13:01.353 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.353 "is_configured": true, 00:13:01.353 "data_offset": 2048, 00:13:01.353 "data_size": 63488 00:13:01.353 } 
00:13:01.353 ] 00:13:01.353 }' 00:13:01.353 18:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.353 18:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.920 [2024-12-06 18:11:27.239550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:01.920 [2024-12-06 18:11:27.239712] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.920 [2024-12-06 18:11:27.239846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.920 [2024-12-06 18:11:27.239928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.920 [2024-12-06 18:11:27.239951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.920 18:11:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.920 [2024-12-06 18:11:27.327556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:01.920 [2024-12-06 18:11:27.327748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.920 [2024-12-06 18:11:27.327944] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:01.920 [2024-12-06 18:11:27.328079] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.920 [2024-12-06 18:11:27.330914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.920 [2024-12-06 18:11:27.330966] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:01.920 [2024-12-06 18:11:27.331061] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:01.920 [2024-12-06 18:11:27.331125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:01.920 pt2 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.920 18:11:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.920 "name": "raid_bdev1", 00:13:01.920 "uuid": "86fe3281-9245-421f-8d6f-48d564aafc10", 00:13:01.920 "strip_size_kb": 0, 00:13:01.920 "state": "configuring", 00:13:01.920 "raid_level": "raid1", 00:13:01.920 "superblock": true, 00:13:01.920 "num_base_bdevs": 3, 00:13:01.920 "num_base_bdevs_discovered": 1, 00:13:01.920 "num_base_bdevs_operational": 2, 00:13:01.920 "base_bdevs_list": [ 00:13:01.920 { 00:13:01.920 "name": null, 00:13:01.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.920 "is_configured": false, 00:13:01.920 "data_offset": 2048, 00:13:01.920 "data_size": 63488 00:13:01.920 }, 00:13:01.920 { 00:13:01.920 "name": "pt2", 00:13:01.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.920 "is_configured": true, 00:13:01.920 "data_offset": 2048, 00:13:01.920 "data_size": 63488 00:13:01.920 }, 00:13:01.920 { 00:13:01.920 "name": null, 00:13:01.920 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.920 "is_configured": false, 00:13:01.920 "data_offset": 2048, 00:13:01.920 "data_size": 63488 00:13:01.920 } 
00:13:01.920 ] 00:13:01.920 }' 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.920 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.508 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:02.508 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:02.508 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:02.508 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:02.508 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.508 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.508 [2024-12-06 18:11:27.851714] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:02.508 [2024-12-06 18:11:27.852081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.508 [2024-12-06 18:11:27.852134] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:02.508 [2024-12-06 18:11:27.852155] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.508 [2024-12-06 18:11:27.852726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.508 [2024-12-06 18:11:27.852782] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:02.508 [2024-12-06 18:11:27.852898] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:02.508 [2024-12-06 18:11:27.852952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:02.508 [2024-12-06 18:11:27.853097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:13:02.508 [2024-12-06 18:11:27.853118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:02.508 [2024-12-06 18:11:27.853438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:02.508 [2024-12-06 18:11:27.853646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:02.508 [2024-12-06 18:11:27.853672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:02.508 [2024-12-06 18:11:27.853864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.508 pt3 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.509 
18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.509 "name": "raid_bdev1", 00:13:02.509 "uuid": "86fe3281-9245-421f-8d6f-48d564aafc10", 00:13:02.509 "strip_size_kb": 0, 00:13:02.509 "state": "online", 00:13:02.509 "raid_level": "raid1", 00:13:02.509 "superblock": true, 00:13:02.509 "num_base_bdevs": 3, 00:13:02.509 "num_base_bdevs_discovered": 2, 00:13:02.509 "num_base_bdevs_operational": 2, 00:13:02.509 "base_bdevs_list": [ 00:13:02.509 { 00:13:02.509 "name": null, 00:13:02.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.509 "is_configured": false, 00:13:02.509 "data_offset": 2048, 00:13:02.509 "data_size": 63488 00:13:02.509 }, 00:13:02.509 { 00:13:02.509 "name": "pt2", 00:13:02.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:02.509 "is_configured": true, 00:13:02.509 "data_offset": 2048, 00:13:02.509 "data_size": 63488 00:13:02.509 }, 00:13:02.509 { 00:13:02.509 "name": "pt3", 00:13:02.509 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:02.509 "is_configured": true, 00:13:02.509 "data_offset": 2048, 00:13:02.509 "data_size": 63488 00:13:02.509 } 00:13:02.509 ] 00:13:02.509 }' 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.509 18:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.075 [2024-12-06 18:11:28.367838] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:03.075 [2024-12-06 18:11:28.367888] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.075 [2024-12-06 18:11:28.367982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.075 [2024-12-06 18:11:28.368069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.075 [2024-12-06 18:11:28.368094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.075 [2024-12-06 18:11:28.439853] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:03.075 [2024-12-06 18:11:28.439911] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.075 [2024-12-06 18:11:28.439938] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:03.075 [2024-12-06 18:11:28.439952] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.075 [2024-12-06 18:11:28.442826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.075 [2024-12-06 18:11:28.442867] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:03.075 [2024-12-06 18:11:28.442969] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:03.075 [2024-12-06 18:11:28.443030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:03.075 [2024-12-06 18:11:28.443192] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:03.075 [2024-12-06 18:11:28.443216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:03.075 [2024-12-06 18:11:28.443240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:13:03.075 [2024-12-06 18:11:28.443309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:03.075 pt1 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.075 "name": "raid_bdev1", 00:13:03.075 "uuid": "86fe3281-9245-421f-8d6f-48d564aafc10", 00:13:03.075 "strip_size_kb": 0, 00:13:03.075 "state": "configuring", 00:13:03.075 "raid_level": "raid1", 00:13:03.075 "superblock": true, 00:13:03.075 "num_base_bdevs": 3, 00:13:03.075 "num_base_bdevs_discovered": 1, 00:13:03.075 "num_base_bdevs_operational": 2, 00:13:03.075 "base_bdevs_list": [ 00:13:03.075 { 00:13:03.075 "name": null, 00:13:03.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.075 "is_configured": false, 00:13:03.075 "data_offset": 2048, 00:13:03.075 "data_size": 63488 00:13:03.075 }, 00:13:03.075 { 00:13:03.075 "name": "pt2", 00:13:03.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:03.075 "is_configured": true, 00:13:03.075 "data_offset": 2048, 00:13:03.075 "data_size": 63488 00:13:03.075 }, 00:13:03.075 { 00:13:03.075 "name": null, 00:13:03.075 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:03.075 "is_configured": false, 00:13:03.075 "data_offset": 2048, 00:13:03.075 "data_size": 63488 00:13:03.075 } 00:13:03.075 ] 00:13:03.075 }' 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.075 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.642 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:03.642 18:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:03.642 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.642 18:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.642 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:03.642 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:03.642 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:03.642 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.642 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.642 [2024-12-06 18:11:29.036033] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:03.642 [2024-12-06 18:11:29.036124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.642 [2024-12-06 18:11:29.036159] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:03.642 [2024-12-06 18:11:29.036174] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.642 [2024-12-06 18:11:29.036754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.642 [2024-12-06 18:11:29.036809] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:03.642 [2024-12-06 18:11:29.036914] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:03.642 [2024-12-06 18:11:29.036952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:03.642 [2024-12-06 18:11:29.037116] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:03.642 [2024-12-06 18:11:29.037132] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:03.642 [2024-12-06 18:11:29.037448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:03.642 [2024-12-06 18:11:29.037651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:03.642 [2024-12-06 18:11:29.037675] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:03.642 [2024-12-06 18:11:29.037868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.642 pt3 00:13:03.642 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.642 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:03.642 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.643 "name": "raid_bdev1", 00:13:03.643 "uuid": "86fe3281-9245-421f-8d6f-48d564aafc10", 00:13:03.643 "strip_size_kb": 0, 00:13:03.643 "state": "online", 00:13:03.643 "raid_level": "raid1", 00:13:03.643 "superblock": true, 00:13:03.643 "num_base_bdevs": 3, 00:13:03.643 "num_base_bdevs_discovered": 2, 00:13:03.643 "num_base_bdevs_operational": 2, 00:13:03.643 "base_bdevs_list": [ 00:13:03.643 { 00:13:03.643 "name": null, 00:13:03.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.643 "is_configured": false, 00:13:03.643 "data_offset": 2048, 00:13:03.643 "data_size": 63488 00:13:03.643 }, 00:13:03.643 { 00:13:03.643 "name": "pt2", 00:13:03.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:03.643 "is_configured": true, 00:13:03.643 "data_offset": 2048, 00:13:03.643 "data_size": 63488 00:13:03.643 }, 00:13:03.643 { 00:13:03.643 "name": "pt3", 00:13:03.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:03.643 "is_configured": true, 00:13:03.643 "data_offset": 2048, 00:13:03.643 "data_size": 63488 00:13:03.643 } 00:13:03.643 ] 00:13:03.643 }' 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.643 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:04.209 [2024-12-06 18:11:29.584511] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 86fe3281-9245-421f-8d6f-48d564aafc10 '!=' 86fe3281-9245-421f-8d6f-48d564aafc10 ']' 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68772 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68772 ']' 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68772 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68772 00:13:04.209 killing process with pid 68772 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68772' 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68772 00:13:04.209 [2024-12-06 18:11:29.669466] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:04.209 18:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68772 00:13:04.209 [2024-12-06 18:11:29.669581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.209 [2024-12-06 18:11:29.669673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.209 [2024-12-06 18:11:29.669694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:04.467 [2024-12-06 18:11:29.938155] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.837 ************************************ 00:13:05.837 END TEST raid_superblock_test 00:13:05.837 ************************************ 00:13:05.837 18:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:05.837 00:13:05.837 real 0m8.553s 00:13:05.837 user 0m13.996s 00:13:05.837 sys 0m1.182s 00:13:05.837 18:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.837 18:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.837 18:11:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:13:05.837 18:11:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:05.837 18:11:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:05.837 18:11:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.837 ************************************ 00:13:05.837 START TEST raid_read_error_test 00:13:05.837 ************************************ 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:13:05.837 18:11:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:05.837 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:05.838 18:11:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RDVktbEa4I 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69223 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69223 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69223 ']' 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.838 18:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.838 [2024-12-06 18:11:31.165488] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:13:05.838 [2024-12-06 18:11:31.165698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69223 ] 00:13:05.838 [2024-12-06 18:11:31.349025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.094 [2024-12-06 18:11:31.480552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.352 [2024-12-06 18:11:31.684155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.352 [2024-12-06 18:11:31.684236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.919 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.919 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:06.919 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:06.919 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:06.919 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.920 BaseBdev1_malloc 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.920 true 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.920 [2024-12-06 18:11:32.198297] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:06.920 [2024-12-06 18:11:32.198364] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.920 [2024-12-06 18:11:32.198392] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:06.920 [2024-12-06 18:11:32.198410] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.920 [2024-12-06 18:11:32.201145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.920 [2024-12-06 18:11:32.201197] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:06.920 BaseBdev1 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.920 BaseBdev2_malloc 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.920 true 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.920 [2024-12-06 18:11:32.253981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:06.920 [2024-12-06 18:11:32.254045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.920 [2024-12-06 18:11:32.254069] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:06.920 [2024-12-06 18:11:32.254086] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.920 [2024-12-06 18:11:32.256946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.920 [2024-12-06 18:11:32.256998] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:06.920 BaseBdev2 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.920 BaseBdev3_malloc 00:13:06.920 18:11:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.920 true 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.920 [2024-12-06 18:11:32.322632] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:06.920 [2024-12-06 18:11:32.322724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.920 [2024-12-06 18:11:32.322751] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:06.920 [2024-12-06 18:11:32.322782] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.920 [2024-12-06 18:11:32.325650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.920 [2024-12-06 18:11:32.325699] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:06.920 BaseBdev3 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.920 [2024-12-06 18:11:32.330747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.920 [2024-12-06 18:11:32.333223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.920 [2024-12-06 18:11:32.333334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:06.920 [2024-12-06 18:11:32.333632] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:06.920 [2024-12-06 18:11:32.333668] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:06.920 [2024-12-06 18:11:32.333998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:06.920 [2024-12-06 18:11:32.334243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:06.920 [2024-12-06 18:11:32.334271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:06.920 [2024-12-06 18:11:32.334468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.920 18:11:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.920 "name": "raid_bdev1", 00:13:06.920 "uuid": "c474f176-b40a-46f9-8cef-07dd42558c0c", 00:13:06.920 "strip_size_kb": 0, 00:13:06.920 "state": "online", 00:13:06.920 "raid_level": "raid1", 00:13:06.920 "superblock": true, 00:13:06.920 "num_base_bdevs": 3, 00:13:06.920 "num_base_bdevs_discovered": 3, 00:13:06.920 "num_base_bdevs_operational": 3, 00:13:06.920 "base_bdevs_list": [ 00:13:06.920 { 00:13:06.920 "name": "BaseBdev1", 00:13:06.920 "uuid": "68a0e04f-465c-5eca-99b8-683c6aca1c1c", 00:13:06.920 "is_configured": true, 00:13:06.920 "data_offset": 2048, 00:13:06.920 "data_size": 63488 00:13:06.920 }, 00:13:06.920 { 00:13:06.920 "name": "BaseBdev2", 00:13:06.920 "uuid": "7ff3a5e8-0789-5bb4-baa4-c7f415f0def1", 00:13:06.920 "is_configured": true, 00:13:06.920 "data_offset": 2048, 00:13:06.920 "data_size": 63488 
00:13:06.920 }, 00:13:06.920 { 00:13:06.920 "name": "BaseBdev3", 00:13:06.920 "uuid": "16ffd7fb-9d46-51fe-9e2c-34e5329ab5a0", 00:13:06.920 "is_configured": true, 00:13:06.920 "data_offset": 2048, 00:13:06.920 "data_size": 63488 00:13:06.920 } 00:13:06.920 ] 00:13:06.920 }' 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.920 18:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.487 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:07.487 18:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:07.487 [2024-12-06 18:11:32.952287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:08.419 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:08.419 18:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.419 18:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.419 18:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.419 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:08.419 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.420 
18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.420 "name": "raid_bdev1", 00:13:08.420 "uuid": "c474f176-b40a-46f9-8cef-07dd42558c0c", 00:13:08.420 "strip_size_kb": 0, 00:13:08.420 "state": "online", 00:13:08.420 "raid_level": "raid1", 00:13:08.420 "superblock": true, 00:13:08.420 "num_base_bdevs": 3, 00:13:08.420 "num_base_bdevs_discovered": 3, 00:13:08.420 "num_base_bdevs_operational": 3, 00:13:08.420 "base_bdevs_list": [ 00:13:08.420 { 00:13:08.420 "name": "BaseBdev1", 00:13:08.420 "uuid": "68a0e04f-465c-5eca-99b8-683c6aca1c1c", 
00:13:08.420 "is_configured": true, 00:13:08.420 "data_offset": 2048, 00:13:08.420 "data_size": 63488 00:13:08.420 }, 00:13:08.420 { 00:13:08.420 "name": "BaseBdev2", 00:13:08.420 "uuid": "7ff3a5e8-0789-5bb4-baa4-c7f415f0def1", 00:13:08.420 "is_configured": true, 00:13:08.420 "data_offset": 2048, 00:13:08.420 "data_size": 63488 00:13:08.420 }, 00:13:08.420 { 00:13:08.420 "name": "BaseBdev3", 00:13:08.420 "uuid": "16ffd7fb-9d46-51fe-9e2c-34e5329ab5a0", 00:13:08.420 "is_configured": true, 00:13:08.420 "data_offset": 2048, 00:13:08.420 "data_size": 63488 00:13:08.420 } 00:13:08.420 ] 00:13:08.420 }' 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.420 18:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.988 18:11:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:08.988 18:11:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.988 18:11:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.988 [2024-12-06 18:11:34.353113] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:08.988 [2024-12-06 18:11:34.353151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.988 [2024-12-06 18:11:34.356648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.988 [2024-12-06 18:11:34.356718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.988 [2024-12-06 18:11:34.356901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.988 [2024-12-06 18:11:34.356927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:08.988 { 00:13:08.988 "results": [ 00:13:08.988 { 00:13:08.988 "job": "raid_bdev1", 
00:13:08.988 "core_mask": "0x1", 00:13:08.988 "workload": "randrw", 00:13:08.988 "percentage": 50, 00:13:08.988 "status": "finished", 00:13:08.988 "queue_depth": 1, 00:13:08.988 "io_size": 131072, 00:13:08.988 "runtime": 1.398566, 00:13:08.988 "iops": 9334.561257745434, 00:13:08.988 "mibps": 1166.8201572181792, 00:13:08.988 "io_failed": 0, 00:13:08.988 "io_timeout": 0, 00:13:08.988 "avg_latency_us": 102.7113340064761, 00:13:08.988 "min_latency_us": 43.28727272727273, 00:13:08.988 "max_latency_us": 1936.290909090909 00:13:08.988 } 00:13:08.988 ], 00:13:08.988 "core_count": 1 00:13:08.988 } 00:13:08.988 18:11:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.988 18:11:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69223 00:13:08.988 18:11:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69223 ']' 00:13:08.988 18:11:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69223 00:13:08.988 18:11:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:08.988 18:11:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.988 18:11:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69223 00:13:08.988 18:11:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.988 18:11:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.988 18:11:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69223' 00:13:08.988 killing process with pid 69223 00:13:08.988 18:11:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69223 00:13:08.988 [2024-12-06 18:11:34.388101] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.988 18:11:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69223 00:13:09.248 [2024-12-06 18:11:34.595998] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:10.623 18:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RDVktbEa4I 00:13:10.623 18:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:10.623 18:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:10.623 18:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:10.623 18:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:10.623 18:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:10.623 18:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:10.623 18:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:10.623 00:13:10.623 real 0m4.671s 00:13:10.623 user 0m5.796s 00:13:10.623 sys 0m0.559s 00:13:10.623 18:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.623 18:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.623 ************************************ 00:13:10.623 END TEST raid_read_error_test 00:13:10.623 ************************************ 00:13:10.623 18:11:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:13:10.623 18:11:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:10.623 18:11:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.623 18:11:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:10.623 ************************************ 00:13:10.623 START TEST raid_write_error_test 00:13:10.623 ************************************ 00:13:10.623 18:11:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:13:10.623 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:10.623 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:10.623 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:10.623 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:10.623 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:10.623 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:10.623 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:10.623 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:10.623 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:10.623 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:10.623 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:10.623 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NjeKgEWOOE 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69370 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69370 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69370 ']' 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.624 18:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.624 [2024-12-06 18:11:35.909244] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:13:10.624 [2024-12-06 18:11:35.909434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69370 ] 00:13:10.624 [2024-12-06 18:11:36.099000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.882 [2024-12-06 18:11:36.249654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.141 [2024-12-06 18:11:36.454523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.142 [2024-12-06 18:11:36.454560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.710 BaseBdev1_malloc 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.710 true 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.710 [2024-12-06 18:11:36.977387] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:11.710 [2024-12-06 18:11:36.977452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.710 [2024-12-06 18:11:36.977482] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:11.710 [2024-12-06 18:11:36.977500] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.710 [2024-12-06 18:11:36.980284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.710 [2024-12-06 18:11:36.980332] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:11.710 BaseBdev1 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.710 18:11:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:11.710 BaseBdev2_malloc 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.710 true 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.710 [2024-12-06 18:11:37.031803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:11.710 [2024-12-06 18:11:37.031897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.710 [2024-12-06 18:11:37.031923] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:11.710 [2024-12-06 18:11:37.031941] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.710 [2024-12-06 18:11:37.034633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.710 [2024-12-06 18:11:37.034724] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:11.710 BaseBdev2 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:11.710 18:11:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.710 BaseBdev3_malloc 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.710 true 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.710 [2024-12-06 18:11:37.098334] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:11.710 [2024-12-06 18:11:37.098439] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.710 [2024-12-06 18:11:37.098466] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:11.710 [2024-12-06 18:11:37.098484] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.710 [2024-12-06 18:11:37.101384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.710 [2024-12-06 18:11:37.101472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:11.710 BaseBdev3 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.710 [2024-12-06 18:11:37.106480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.710 [2024-12-06 18:11:37.109064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.710 [2024-12-06 18:11:37.109172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.710 [2024-12-06 18:11:37.109450] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:11.710 [2024-12-06 18:11:37.109478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:11.710 [2024-12-06 18:11:37.109812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:11.710 [2024-12-06 18:11:37.110056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:11.710 [2024-12-06 18:11:37.110084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:11.710 [2024-12-06 18:11:37.110269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.710 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.710 "name": "raid_bdev1", 00:13:11.710 "uuid": "90088898-9785-48fc-a662-1b8b7aae10c1", 00:13:11.710 "strip_size_kb": 0, 00:13:11.710 "state": "online", 00:13:11.710 "raid_level": "raid1", 00:13:11.710 "superblock": true, 00:13:11.710 "num_base_bdevs": 3, 00:13:11.710 "num_base_bdevs_discovered": 3, 00:13:11.710 "num_base_bdevs_operational": 3, 00:13:11.710 "base_bdevs_list": [ 00:13:11.710 { 00:13:11.710 "name": "BaseBdev1", 00:13:11.710 
"uuid": "b9cba5ae-e72d-5cf2-8c79-3eeab901897e", 00:13:11.710 "is_configured": true, 00:13:11.710 "data_offset": 2048, 00:13:11.710 "data_size": 63488 00:13:11.710 }, 00:13:11.710 { 00:13:11.710 "name": "BaseBdev2", 00:13:11.710 "uuid": "e206bf04-0761-5e12-a36d-187bd488c6dc", 00:13:11.710 "is_configured": true, 00:13:11.710 "data_offset": 2048, 00:13:11.710 "data_size": 63488 00:13:11.710 }, 00:13:11.710 { 00:13:11.710 "name": "BaseBdev3", 00:13:11.711 "uuid": "be517c55-e665-5967-9be8-b50af19c84dc", 00:13:11.711 "is_configured": true, 00:13:11.711 "data_offset": 2048, 00:13:11.711 "data_size": 63488 00:13:11.711 } 00:13:11.711 ] 00:13:11.711 }' 00:13:11.711 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.711 18:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.277 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:12.277 18:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:12.278 [2024-12-06 18:11:37.716082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:13.214 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:13.214 18:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.214 18:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.214 [2024-12-06 18:11:38.588460] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:13.214 [2024-12-06 18:11:38.588523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:13.214 [2024-12-06 18:11:38.588801] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:13:13.214 18:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.214 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:13.214 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:13.214 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:13.214 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:13.214 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:13.214 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.214 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.214 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.214 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.215 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:13.215 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.215 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.215 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.215 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.215 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.215 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.215 18:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:13.215 18:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.215 18:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.215 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.215 "name": "raid_bdev1", 00:13:13.215 "uuid": "90088898-9785-48fc-a662-1b8b7aae10c1", 00:13:13.215 "strip_size_kb": 0, 00:13:13.215 "state": "online", 00:13:13.215 "raid_level": "raid1", 00:13:13.215 "superblock": true, 00:13:13.215 "num_base_bdevs": 3, 00:13:13.215 "num_base_bdevs_discovered": 2, 00:13:13.215 "num_base_bdevs_operational": 2, 00:13:13.215 "base_bdevs_list": [ 00:13:13.215 { 00:13:13.215 "name": null, 00:13:13.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.215 "is_configured": false, 00:13:13.215 "data_offset": 0, 00:13:13.215 "data_size": 63488 00:13:13.215 }, 00:13:13.215 { 00:13:13.215 "name": "BaseBdev2", 00:13:13.215 "uuid": "e206bf04-0761-5e12-a36d-187bd488c6dc", 00:13:13.215 "is_configured": true, 00:13:13.215 "data_offset": 2048, 00:13:13.215 "data_size": 63488 00:13:13.215 }, 00:13:13.215 { 00:13:13.215 "name": "BaseBdev3", 00:13:13.215 "uuid": "be517c55-e665-5967-9be8-b50af19c84dc", 00:13:13.215 "is_configured": true, 00:13:13.215 "data_offset": 2048, 00:13:13.215 "data_size": 63488 00:13:13.215 } 00:13:13.215 ] 00:13:13.215 }' 00:13:13.215 18:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.215 18:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.782 18:11:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:13.782 18:11:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.782 18:11:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.782 [2024-12-06 18:11:39.130144] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.782 [2024-12-06 18:11:39.130187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.782 [2024-12-06 18:11:39.133523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.782 [2024-12-06 18:11:39.133613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.782 [2024-12-06 18:11:39.133721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.782 [2024-12-06 18:11:39.133745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:13.782 18:11:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.782 18:11:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69370 00:13:13.782 { 00:13:13.782 "results": [ 00:13:13.782 { 00:13:13.782 "job": "raid_bdev1", 00:13:13.782 "core_mask": "0x1", 00:13:13.782 "workload": "randrw", 00:13:13.782 "percentage": 50, 00:13:13.782 "status": "finished", 00:13:13.782 "queue_depth": 1, 00:13:13.782 "io_size": 131072, 00:13:13.782 "runtime": 1.411577, 00:13:13.782 "iops": 10647.665695884816, 00:13:13.782 "mibps": 1330.958211985602, 00:13:13.782 "io_failed": 0, 00:13:13.782 "io_timeout": 0, 00:13:13.782 "avg_latency_us": 89.52555978951189, 00:13:13.782 "min_latency_us": 40.49454545454545, 00:13:13.782 "max_latency_us": 1854.370909090909 00:13:13.782 } 00:13:13.782 ], 00:13:13.782 "core_count": 1 00:13:13.782 } 00:13:13.782 18:11:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69370 ']' 00:13:13.783 18:11:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69370 00:13:13.783 18:11:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:13.783 18:11:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.783 18:11:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69370 00:13:13.783 18:11:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.783 18:11:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.783 killing process with pid 69370 00:13:13.783 18:11:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69370' 00:13:13.783 18:11:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69370 00:13:13.783 [2024-12-06 18:11:39.170713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:13.783 18:11:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69370 00:13:14.041 [2024-12-06 18:11:39.373652] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.977 18:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:14.977 18:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NjeKgEWOOE 00:13:14.977 18:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:14.977 18:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:14.977 18:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:14.977 18:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:14.977 18:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:14.977 18:11:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:14.977 00:13:14.977 real 0m4.709s 00:13:14.977 user 0m5.850s 00:13:14.977 sys 0m0.592s 00:13:14.977 18:11:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.977 18:11:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.977 ************************************ 00:13:14.977 END TEST raid_write_error_test 00:13:14.977 ************************************ 00:13:15.236 18:11:40 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:15.236 18:11:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:15.236 18:11:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:13:15.236 18:11:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:15.236 18:11:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.236 18:11:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.236 ************************************ 00:13:15.236 START TEST raid_state_function_test 00:13:15.236 ************************************ 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:15.236 
18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69518 00:13:15.236 Process raid pid: 69518 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69518' 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69518 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69518 ']' 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.236 18:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.236 [2024-12-06 18:11:40.636330] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:13:15.236 [2024-12-06 18:11:40.636513] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.495 [2024-12-06 18:11:40.821519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.495 [2024-12-06 18:11:40.950136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.754 [2024-12-06 18:11:41.157692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.754 [2024-12-06 18:11:41.157738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.323 [2024-12-06 18:11:41.620914] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:16.323 [2024-12-06 18:11:41.620988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:16.323 [2024-12-06 18:11:41.621005] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:16.323 [2024-12-06 18:11:41.621022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:16.323 [2024-12-06 18:11:41.621032] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:16.323 [2024-12-06 18:11:41.621046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:16.323 [2024-12-06 18:11:41.621056] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:16.323 [2024-12-06 18:11:41.621071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.323 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.323 "name": "Existed_Raid", 00:13:16.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.323 "strip_size_kb": 64, 00:13:16.323 "state": "configuring", 00:13:16.323 "raid_level": "raid0", 00:13:16.323 "superblock": false, 00:13:16.323 "num_base_bdevs": 4, 00:13:16.323 "num_base_bdevs_discovered": 0, 00:13:16.323 "num_base_bdevs_operational": 4, 00:13:16.323 "base_bdevs_list": [ 00:13:16.323 { 00:13:16.323 "name": "BaseBdev1", 00:13:16.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.323 "is_configured": false, 00:13:16.323 "data_offset": 0, 00:13:16.323 "data_size": 0 00:13:16.323 }, 00:13:16.323 { 00:13:16.323 "name": "BaseBdev2", 00:13:16.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.324 "is_configured": false, 00:13:16.324 "data_offset": 0, 00:13:16.324 "data_size": 0 00:13:16.324 }, 00:13:16.324 { 00:13:16.324 "name": "BaseBdev3", 00:13:16.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.324 "is_configured": false, 00:13:16.324 "data_offset": 0, 00:13:16.324 "data_size": 0 00:13:16.324 }, 00:13:16.324 { 00:13:16.324 "name": "BaseBdev4", 00:13:16.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.324 "is_configured": false, 00:13:16.324 "data_offset": 0, 00:13:16.324 "data_size": 0 00:13:16.324 } 00:13:16.324 ] 00:13:16.324 }' 00:13:16.324 18:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.324 18:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.582 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:16.582 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.582 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.582 [2024-12-06 18:11:42.097037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:16.582 [2024-12-06 18:11:42.097086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.842 [2024-12-06 18:11:42.105029] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:16.842 [2024-12-06 18:11:42.105080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:16.842 [2024-12-06 18:11:42.105094] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:16.842 [2024-12-06 18:11:42.105110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:16.842 [2024-12-06 18:11:42.105119] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:16.842 [2024-12-06 18:11:42.105133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:16.842 [2024-12-06 18:11:42.105143] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:16.842 [2024-12-06 18:11:42.105157] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.842 [2024-12-06 18:11:42.150249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.842 BaseBdev1 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.842 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.842 [ 00:13:16.842 { 00:13:16.842 "name": "BaseBdev1", 00:13:16.842 "aliases": [ 00:13:16.842 "f9768a3b-20d1-4020-ba31-433136e13179" 00:13:16.842 ], 00:13:16.842 "product_name": "Malloc disk", 00:13:16.842 "block_size": 512, 00:13:16.842 "num_blocks": 65536, 00:13:16.842 "uuid": "f9768a3b-20d1-4020-ba31-433136e13179", 00:13:16.842 "assigned_rate_limits": { 00:13:16.842 "rw_ios_per_sec": 0, 00:13:16.842 "rw_mbytes_per_sec": 0, 00:13:16.842 "r_mbytes_per_sec": 0, 00:13:16.842 "w_mbytes_per_sec": 0 00:13:16.842 }, 00:13:16.842 "claimed": true, 00:13:16.842 "claim_type": "exclusive_write", 00:13:16.842 "zoned": false, 00:13:16.842 "supported_io_types": { 00:13:16.842 "read": true, 00:13:16.842 "write": true, 00:13:16.842 "unmap": true, 00:13:16.842 "flush": true, 00:13:16.842 "reset": true, 00:13:16.842 "nvme_admin": false, 00:13:16.842 "nvme_io": false, 00:13:16.842 "nvme_io_md": false, 00:13:16.842 "write_zeroes": true, 00:13:16.842 "zcopy": true, 00:13:16.842 "get_zone_info": false, 00:13:16.842 "zone_management": false, 00:13:16.842 "zone_append": false, 00:13:16.842 "compare": false, 00:13:16.842 "compare_and_write": false, 00:13:16.842 "abort": true, 00:13:16.842 "seek_hole": false, 00:13:16.842 "seek_data": false, 00:13:16.842 "copy": true, 00:13:16.842 "nvme_iov_md": false 00:13:16.842 }, 00:13:16.842 "memory_domains": [ 00:13:16.842 { 00:13:16.842 "dma_device_id": "system", 00:13:16.842 "dma_device_type": 1 00:13:16.842 }, 00:13:16.842 { 00:13:16.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.842 "dma_device_type": 2 00:13:16.843 } 00:13:16.843 ], 00:13:16.843 "driver_specific": {} 00:13:16.843 } 00:13:16.843 ] 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.843 "name": "Existed_Raid", 
00:13:16.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.843 "strip_size_kb": 64, 00:13:16.843 "state": "configuring", 00:13:16.843 "raid_level": "raid0", 00:13:16.843 "superblock": false, 00:13:16.843 "num_base_bdevs": 4, 00:13:16.843 "num_base_bdevs_discovered": 1, 00:13:16.843 "num_base_bdevs_operational": 4, 00:13:16.843 "base_bdevs_list": [ 00:13:16.843 { 00:13:16.843 "name": "BaseBdev1", 00:13:16.843 "uuid": "f9768a3b-20d1-4020-ba31-433136e13179", 00:13:16.843 "is_configured": true, 00:13:16.843 "data_offset": 0, 00:13:16.843 "data_size": 65536 00:13:16.843 }, 00:13:16.843 { 00:13:16.843 "name": "BaseBdev2", 00:13:16.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.843 "is_configured": false, 00:13:16.843 "data_offset": 0, 00:13:16.843 "data_size": 0 00:13:16.843 }, 00:13:16.843 { 00:13:16.843 "name": "BaseBdev3", 00:13:16.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.843 "is_configured": false, 00:13:16.843 "data_offset": 0, 00:13:16.843 "data_size": 0 00:13:16.843 }, 00:13:16.843 { 00:13:16.843 "name": "BaseBdev4", 00:13:16.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.843 "is_configured": false, 00:13:16.843 "data_offset": 0, 00:13:16.843 "data_size": 0 00:13:16.843 } 00:13:16.843 ] 00:13:16.843 }' 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.843 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.409 [2024-12-06 18:11:42.694455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:17.409 [2024-12-06 18:11:42.694537] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.409 [2024-12-06 18:11:42.702505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.409 [2024-12-06 18:11:42.704998] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:17.409 [2024-12-06 18:11:42.705052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:17.409 [2024-12-06 18:11:42.705068] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:17.409 [2024-12-06 18:11:42.705087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:17.409 [2024-12-06 18:11:42.705104] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:17.409 [2024-12-06 18:11:42.705118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.409 "name": "Existed_Raid", 00:13:17.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.409 "strip_size_kb": 64, 00:13:17.409 "state": "configuring", 00:13:17.409 "raid_level": "raid0", 00:13:17.409 "superblock": false, 00:13:17.409 "num_base_bdevs": 4, 00:13:17.409 
"num_base_bdevs_discovered": 1, 00:13:17.409 "num_base_bdevs_operational": 4, 00:13:17.409 "base_bdevs_list": [ 00:13:17.409 { 00:13:17.409 "name": "BaseBdev1", 00:13:17.409 "uuid": "f9768a3b-20d1-4020-ba31-433136e13179", 00:13:17.409 "is_configured": true, 00:13:17.409 "data_offset": 0, 00:13:17.409 "data_size": 65536 00:13:17.409 }, 00:13:17.409 { 00:13:17.409 "name": "BaseBdev2", 00:13:17.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.409 "is_configured": false, 00:13:17.409 "data_offset": 0, 00:13:17.409 "data_size": 0 00:13:17.409 }, 00:13:17.409 { 00:13:17.409 "name": "BaseBdev3", 00:13:17.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.409 "is_configured": false, 00:13:17.409 "data_offset": 0, 00:13:17.409 "data_size": 0 00:13:17.409 }, 00:13:17.409 { 00:13:17.409 "name": "BaseBdev4", 00:13:17.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.409 "is_configured": false, 00:13:17.409 "data_offset": 0, 00:13:17.409 "data_size": 0 00:13:17.409 } 00:13:17.409 ] 00:13:17.409 }' 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.409 18:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.975 [2024-12-06 18:11:43.301798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.975 BaseBdev2 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:17.975 18:11:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.975 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.975 [ 00:13:17.975 { 00:13:17.975 "name": "BaseBdev2", 00:13:17.975 "aliases": [ 00:13:17.975 "8808a9c7-b780-462a-85c5-d9a218f8ba49" 00:13:17.975 ], 00:13:17.975 "product_name": "Malloc disk", 00:13:17.976 "block_size": 512, 00:13:17.976 "num_blocks": 65536, 00:13:17.976 "uuid": "8808a9c7-b780-462a-85c5-d9a218f8ba49", 00:13:17.976 "assigned_rate_limits": { 00:13:17.976 "rw_ios_per_sec": 0, 00:13:17.976 "rw_mbytes_per_sec": 0, 00:13:17.976 "r_mbytes_per_sec": 0, 00:13:17.976 "w_mbytes_per_sec": 0 00:13:17.976 }, 00:13:17.976 "claimed": true, 00:13:17.976 "claim_type": "exclusive_write", 00:13:17.976 "zoned": false, 00:13:17.976 "supported_io_types": { 
00:13:17.976 "read": true, 00:13:17.976 "write": true, 00:13:17.976 "unmap": true, 00:13:17.976 "flush": true, 00:13:17.976 "reset": true, 00:13:17.976 "nvme_admin": false, 00:13:17.976 "nvme_io": false, 00:13:17.976 "nvme_io_md": false, 00:13:17.976 "write_zeroes": true, 00:13:17.976 "zcopy": true, 00:13:17.976 "get_zone_info": false, 00:13:17.976 "zone_management": false, 00:13:17.976 "zone_append": false, 00:13:17.976 "compare": false, 00:13:17.976 "compare_and_write": false, 00:13:17.976 "abort": true, 00:13:17.976 "seek_hole": false, 00:13:17.976 "seek_data": false, 00:13:17.976 "copy": true, 00:13:17.976 "nvme_iov_md": false 00:13:17.976 }, 00:13:17.976 "memory_domains": [ 00:13:17.976 { 00:13:17.976 "dma_device_id": "system", 00:13:17.976 "dma_device_type": 1 00:13:17.976 }, 00:13:17.976 { 00:13:17.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.976 "dma_device_type": 2 00:13:17.976 } 00:13:17.976 ], 00:13:17.976 "driver_specific": {} 00:13:17.976 } 00:13:17.976 ] 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.976 "name": "Existed_Raid", 00:13:17.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.976 "strip_size_kb": 64, 00:13:17.976 "state": "configuring", 00:13:17.976 "raid_level": "raid0", 00:13:17.976 "superblock": false, 00:13:17.976 "num_base_bdevs": 4, 00:13:17.976 "num_base_bdevs_discovered": 2, 00:13:17.976 "num_base_bdevs_operational": 4, 00:13:17.976 "base_bdevs_list": [ 00:13:17.976 { 00:13:17.976 "name": "BaseBdev1", 00:13:17.976 "uuid": "f9768a3b-20d1-4020-ba31-433136e13179", 00:13:17.976 "is_configured": true, 00:13:17.976 "data_offset": 0, 00:13:17.976 "data_size": 65536 00:13:17.976 }, 00:13:17.976 { 00:13:17.976 "name": "BaseBdev2", 00:13:17.976 "uuid": "8808a9c7-b780-462a-85c5-d9a218f8ba49", 00:13:17.976 
"is_configured": true, 00:13:17.976 "data_offset": 0, 00:13:17.976 "data_size": 65536 00:13:17.976 }, 00:13:17.976 { 00:13:17.976 "name": "BaseBdev3", 00:13:17.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.976 "is_configured": false, 00:13:17.976 "data_offset": 0, 00:13:17.976 "data_size": 0 00:13:17.976 }, 00:13:17.976 { 00:13:17.976 "name": "BaseBdev4", 00:13:17.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.976 "is_configured": false, 00:13:17.976 "data_offset": 0, 00:13:17.976 "data_size": 0 00:13:17.976 } 00:13:17.976 ] 00:13:17.976 }' 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.976 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.542 [2024-12-06 18:11:43.912376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:18.542 BaseBdev3 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.542 [ 00:13:18.542 { 00:13:18.542 "name": "BaseBdev3", 00:13:18.542 "aliases": [ 00:13:18.542 "1c94a8fb-f943-4e68-ae92-651ca61b882d" 00:13:18.542 ], 00:13:18.542 "product_name": "Malloc disk", 00:13:18.542 "block_size": 512, 00:13:18.542 "num_blocks": 65536, 00:13:18.542 "uuid": "1c94a8fb-f943-4e68-ae92-651ca61b882d", 00:13:18.542 "assigned_rate_limits": { 00:13:18.542 "rw_ios_per_sec": 0, 00:13:18.542 "rw_mbytes_per_sec": 0, 00:13:18.542 "r_mbytes_per_sec": 0, 00:13:18.542 "w_mbytes_per_sec": 0 00:13:18.542 }, 00:13:18.542 "claimed": true, 00:13:18.542 "claim_type": "exclusive_write", 00:13:18.542 "zoned": false, 00:13:18.542 "supported_io_types": { 00:13:18.542 "read": true, 00:13:18.542 "write": true, 00:13:18.542 "unmap": true, 00:13:18.542 "flush": true, 00:13:18.542 "reset": true, 00:13:18.542 "nvme_admin": false, 00:13:18.542 "nvme_io": false, 00:13:18.542 "nvme_io_md": false, 00:13:18.542 "write_zeroes": true, 00:13:18.542 "zcopy": true, 00:13:18.542 "get_zone_info": false, 00:13:18.542 "zone_management": false, 00:13:18.542 "zone_append": false, 00:13:18.542 "compare": false, 00:13:18.542 "compare_and_write": false, 
00:13:18.542 "abort": true, 00:13:18.542 "seek_hole": false, 00:13:18.542 "seek_data": false, 00:13:18.542 "copy": true, 00:13:18.542 "nvme_iov_md": false 00:13:18.542 }, 00:13:18.542 "memory_domains": [ 00:13:18.542 { 00:13:18.542 "dma_device_id": "system", 00:13:18.542 "dma_device_type": 1 00:13:18.542 }, 00:13:18.542 { 00:13:18.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.542 "dma_device_type": 2 00:13:18.542 } 00:13:18.542 ], 00:13:18.542 "driver_specific": {} 00:13:18.542 } 00:13:18.542 ] 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.542 18:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.542 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.542 "name": "Existed_Raid", 00:13:18.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.542 "strip_size_kb": 64, 00:13:18.542 "state": "configuring", 00:13:18.542 "raid_level": "raid0", 00:13:18.542 "superblock": false, 00:13:18.542 "num_base_bdevs": 4, 00:13:18.542 "num_base_bdevs_discovered": 3, 00:13:18.542 "num_base_bdevs_operational": 4, 00:13:18.542 "base_bdevs_list": [ 00:13:18.542 { 00:13:18.542 "name": "BaseBdev1", 00:13:18.542 "uuid": "f9768a3b-20d1-4020-ba31-433136e13179", 00:13:18.542 "is_configured": true, 00:13:18.542 "data_offset": 0, 00:13:18.542 "data_size": 65536 00:13:18.542 }, 00:13:18.542 { 00:13:18.542 "name": "BaseBdev2", 00:13:18.542 "uuid": "8808a9c7-b780-462a-85c5-d9a218f8ba49", 00:13:18.542 "is_configured": true, 00:13:18.542 "data_offset": 0, 00:13:18.542 "data_size": 65536 00:13:18.542 }, 00:13:18.542 { 00:13:18.542 "name": "BaseBdev3", 00:13:18.542 "uuid": "1c94a8fb-f943-4e68-ae92-651ca61b882d", 00:13:18.542 "is_configured": true, 00:13:18.542 "data_offset": 0, 00:13:18.542 "data_size": 65536 00:13:18.542 }, 00:13:18.542 { 00:13:18.542 "name": "BaseBdev4", 00:13:18.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.542 "is_configured": false, 
00:13:18.542 "data_offset": 0, 00:13:18.542 "data_size": 0 00:13:18.542 } 00:13:18.542 ] 00:13:18.542 }' 00:13:18.542 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.542 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.108 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:19.108 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.108 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.109 [2024-12-06 18:11:44.523662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:19.109 [2024-12-06 18:11:44.523734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:19.109 [2024-12-06 18:11:44.523748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:19.109 [2024-12-06 18:11:44.524182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:19.109 BaseBdev4 00:13:19.109 [2024-12-06 18:11:44.524424] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:19.109 [2024-12-06 18:11:44.524452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:19.109 [2024-12-06 18:11:44.524792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.109 [ 00:13:19.109 { 00:13:19.109 "name": "BaseBdev4", 00:13:19.109 "aliases": [ 00:13:19.109 "99a481b7-1a07-46ad-9009-01ef7b973789" 00:13:19.109 ], 00:13:19.109 "product_name": "Malloc disk", 00:13:19.109 "block_size": 512, 00:13:19.109 "num_blocks": 65536, 00:13:19.109 "uuid": "99a481b7-1a07-46ad-9009-01ef7b973789", 00:13:19.109 "assigned_rate_limits": { 00:13:19.109 "rw_ios_per_sec": 0, 00:13:19.109 "rw_mbytes_per_sec": 0, 00:13:19.109 "r_mbytes_per_sec": 0, 00:13:19.109 "w_mbytes_per_sec": 0 00:13:19.109 }, 00:13:19.109 "claimed": true, 00:13:19.109 "claim_type": "exclusive_write", 00:13:19.109 "zoned": false, 00:13:19.109 "supported_io_types": { 00:13:19.109 "read": true, 00:13:19.109 "write": true, 00:13:19.109 "unmap": true, 00:13:19.109 "flush": true, 00:13:19.109 "reset": true, 00:13:19.109 
"nvme_admin": false, 00:13:19.109 "nvme_io": false, 00:13:19.109 "nvme_io_md": false, 00:13:19.109 "write_zeroes": true, 00:13:19.109 "zcopy": true, 00:13:19.109 "get_zone_info": false, 00:13:19.109 "zone_management": false, 00:13:19.109 "zone_append": false, 00:13:19.109 "compare": false, 00:13:19.109 "compare_and_write": false, 00:13:19.109 "abort": true, 00:13:19.109 "seek_hole": false, 00:13:19.109 "seek_data": false, 00:13:19.109 "copy": true, 00:13:19.109 "nvme_iov_md": false 00:13:19.109 }, 00:13:19.109 "memory_domains": [ 00:13:19.109 { 00:13:19.109 "dma_device_id": "system", 00:13:19.109 "dma_device_type": 1 00:13:19.109 }, 00:13:19.109 { 00:13:19.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.109 "dma_device_type": 2 00:13:19.109 } 00:13:19.109 ], 00:13:19.109 "driver_specific": {} 00:13:19.109 } 00:13:19.109 ] 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.109 18:11:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.109 "name": "Existed_Raid", 00:13:19.109 "uuid": "82e60f96-268c-42a1-a334-db4673b66e37", 00:13:19.109 "strip_size_kb": 64, 00:13:19.109 "state": "online", 00:13:19.109 "raid_level": "raid0", 00:13:19.109 "superblock": false, 00:13:19.109 "num_base_bdevs": 4, 00:13:19.109 "num_base_bdevs_discovered": 4, 00:13:19.109 "num_base_bdevs_operational": 4, 00:13:19.109 "base_bdevs_list": [ 00:13:19.109 { 00:13:19.109 "name": "BaseBdev1", 00:13:19.109 "uuid": "f9768a3b-20d1-4020-ba31-433136e13179", 00:13:19.109 "is_configured": true, 00:13:19.109 "data_offset": 0, 00:13:19.109 "data_size": 65536 00:13:19.109 }, 00:13:19.109 { 00:13:19.109 "name": "BaseBdev2", 00:13:19.109 "uuid": "8808a9c7-b780-462a-85c5-d9a218f8ba49", 00:13:19.109 "is_configured": true, 00:13:19.109 "data_offset": 0, 00:13:19.109 "data_size": 65536 00:13:19.109 }, 00:13:19.109 { 00:13:19.109 "name": "BaseBdev3", 00:13:19.109 "uuid": 
"1c94a8fb-f943-4e68-ae92-651ca61b882d", 00:13:19.109 "is_configured": true, 00:13:19.109 "data_offset": 0, 00:13:19.109 "data_size": 65536 00:13:19.109 }, 00:13:19.109 { 00:13:19.109 "name": "BaseBdev4", 00:13:19.109 "uuid": "99a481b7-1a07-46ad-9009-01ef7b973789", 00:13:19.109 "is_configured": true, 00:13:19.109 "data_offset": 0, 00:13:19.109 "data_size": 65536 00:13:19.109 } 00:13:19.109 ] 00:13:19.109 }' 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.109 18:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.677 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:19.677 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:19.677 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:19.677 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:19.677 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:19.677 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:19.677 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:19.677 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.677 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.677 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:19.677 [2024-12-06 18:11:45.112332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.677 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.677 18:11:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:19.677 "name": "Existed_Raid", 00:13:19.677 "aliases": [ 00:13:19.677 "82e60f96-268c-42a1-a334-db4673b66e37" 00:13:19.677 ], 00:13:19.677 "product_name": "Raid Volume", 00:13:19.677 "block_size": 512, 00:13:19.677 "num_blocks": 262144, 00:13:19.677 "uuid": "82e60f96-268c-42a1-a334-db4673b66e37", 00:13:19.677 "assigned_rate_limits": { 00:13:19.677 "rw_ios_per_sec": 0, 00:13:19.677 "rw_mbytes_per_sec": 0, 00:13:19.677 "r_mbytes_per_sec": 0, 00:13:19.677 "w_mbytes_per_sec": 0 00:13:19.677 }, 00:13:19.677 "claimed": false, 00:13:19.677 "zoned": false, 00:13:19.677 "supported_io_types": { 00:13:19.677 "read": true, 00:13:19.677 "write": true, 00:13:19.677 "unmap": true, 00:13:19.677 "flush": true, 00:13:19.677 "reset": true, 00:13:19.677 "nvme_admin": false, 00:13:19.677 "nvme_io": false, 00:13:19.677 "nvme_io_md": false, 00:13:19.677 "write_zeroes": true, 00:13:19.677 "zcopy": false, 00:13:19.677 "get_zone_info": false, 00:13:19.677 "zone_management": false, 00:13:19.677 "zone_append": false, 00:13:19.677 "compare": false, 00:13:19.677 "compare_and_write": false, 00:13:19.677 "abort": false, 00:13:19.677 "seek_hole": false, 00:13:19.677 "seek_data": false, 00:13:19.677 "copy": false, 00:13:19.677 "nvme_iov_md": false 00:13:19.677 }, 00:13:19.677 "memory_domains": [ 00:13:19.677 { 00:13:19.677 "dma_device_id": "system", 00:13:19.677 "dma_device_type": 1 00:13:19.677 }, 00:13:19.677 { 00:13:19.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.677 "dma_device_type": 2 00:13:19.677 }, 00:13:19.677 { 00:13:19.677 "dma_device_id": "system", 00:13:19.677 "dma_device_type": 1 00:13:19.677 }, 00:13:19.677 { 00:13:19.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.677 "dma_device_type": 2 00:13:19.677 }, 00:13:19.677 { 00:13:19.677 "dma_device_id": "system", 00:13:19.677 "dma_device_type": 1 00:13:19.677 }, 00:13:19.677 { 00:13:19.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:19.677 "dma_device_type": 2 00:13:19.677 }, 00:13:19.677 { 00:13:19.677 "dma_device_id": "system", 00:13:19.677 "dma_device_type": 1 00:13:19.677 }, 00:13:19.677 { 00:13:19.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.677 "dma_device_type": 2 00:13:19.677 } 00:13:19.677 ], 00:13:19.677 "driver_specific": { 00:13:19.677 "raid": { 00:13:19.677 "uuid": "82e60f96-268c-42a1-a334-db4673b66e37", 00:13:19.677 "strip_size_kb": 64, 00:13:19.677 "state": "online", 00:13:19.677 "raid_level": "raid0", 00:13:19.677 "superblock": false, 00:13:19.677 "num_base_bdevs": 4, 00:13:19.677 "num_base_bdevs_discovered": 4, 00:13:19.677 "num_base_bdevs_operational": 4, 00:13:19.677 "base_bdevs_list": [ 00:13:19.677 { 00:13:19.677 "name": "BaseBdev1", 00:13:19.677 "uuid": "f9768a3b-20d1-4020-ba31-433136e13179", 00:13:19.677 "is_configured": true, 00:13:19.677 "data_offset": 0, 00:13:19.677 "data_size": 65536 00:13:19.677 }, 00:13:19.677 { 00:13:19.677 "name": "BaseBdev2", 00:13:19.677 "uuid": "8808a9c7-b780-462a-85c5-d9a218f8ba49", 00:13:19.677 "is_configured": true, 00:13:19.677 "data_offset": 0, 00:13:19.677 "data_size": 65536 00:13:19.677 }, 00:13:19.677 { 00:13:19.677 "name": "BaseBdev3", 00:13:19.677 "uuid": "1c94a8fb-f943-4e68-ae92-651ca61b882d", 00:13:19.677 "is_configured": true, 00:13:19.677 "data_offset": 0, 00:13:19.677 "data_size": 65536 00:13:19.677 }, 00:13:19.677 { 00:13:19.677 "name": "BaseBdev4", 00:13:19.677 "uuid": "99a481b7-1a07-46ad-9009-01ef7b973789", 00:13:19.677 "is_configured": true, 00:13:19.677 "data_offset": 0, 00:13:19.677 "data_size": 65536 00:13:19.677 } 00:13:19.677 ] 00:13:19.677 } 00:13:19.677 } 00:13:19.677 }' 00:13:19.677 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:19.934 BaseBdev2 00:13:19.934 BaseBdev3 
00:13:19.934 BaseBdev4' 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.934 18:11:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.934 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.935 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.935 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.935 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:19.935 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.935 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.935 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.935 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.935 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.935 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.935 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:19.935 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.935 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.935 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.935 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.193 18:11:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.193 [2024-12-06 18:11:45.484077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:20.193 [2024-12-06 18:11:45.484482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:20.193 [2024-12-06 18:11:45.484573] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.193 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.193 "name": "Existed_Raid", 00:13:20.193 "uuid": "82e60f96-268c-42a1-a334-db4673b66e37", 00:13:20.193 "strip_size_kb": 64, 00:13:20.193 "state": "offline", 00:13:20.193 "raid_level": "raid0", 00:13:20.193 "superblock": false, 00:13:20.193 "num_base_bdevs": 4, 00:13:20.193 "num_base_bdevs_discovered": 3, 00:13:20.193 "num_base_bdevs_operational": 3, 00:13:20.193 "base_bdevs_list": [ 00:13:20.193 { 00:13:20.193 "name": null, 00:13:20.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.193 "is_configured": false, 00:13:20.193 "data_offset": 0, 00:13:20.193 "data_size": 65536 00:13:20.193 }, 00:13:20.193 { 00:13:20.193 "name": "BaseBdev2", 00:13:20.193 "uuid": "8808a9c7-b780-462a-85c5-d9a218f8ba49", 00:13:20.193 "is_configured": 
true, 00:13:20.193 "data_offset": 0, 00:13:20.193 "data_size": 65536 00:13:20.193 }, 00:13:20.193 { 00:13:20.193 "name": "BaseBdev3", 00:13:20.193 "uuid": "1c94a8fb-f943-4e68-ae92-651ca61b882d", 00:13:20.193 "is_configured": true, 00:13:20.194 "data_offset": 0, 00:13:20.194 "data_size": 65536 00:13:20.194 }, 00:13:20.194 { 00:13:20.194 "name": "BaseBdev4", 00:13:20.194 "uuid": "99a481b7-1a07-46ad-9009-01ef7b973789", 00:13:20.194 "is_configured": true, 00:13:20.194 "data_offset": 0, 00:13:20.194 "data_size": 65536 00:13:20.194 } 00:13:20.194 ] 00:13:20.194 }' 00:13:20.194 18:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.194 18:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.759 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:20.759 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:20.759 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.759 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.759 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:20.759 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.759 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.759 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:20.759 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:20.759 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:20.759 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
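The trace above verifies raid state by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and checking the advertised fields (after deleting `BaseBdev1`, raid0 has no redundancy, so the array drops to `offline` with 3 of 4 base bdevs discovered). A minimal Python sketch of that check, using a sample payload with values copied from the dump above (the helper name mirrors the script's `verify_raid_bdev_state`; it is an illustration, not SPDK code):

```python
import json

# Payload shaped like `bdev_raid_get_bdevs all` output, with the fields
# copied from the "offline" Existed_Raid dump in the log above.
payload = json.dumps([{
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "offline",
    "raid_level": "raid0",
    "superblock": False,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3,
}])

def verify_raid_bdev_state(raw, name, expected_state, raid_level,
                           strip_size, operational):
    """Rough equivalent of the script's verify_raid_bdev_state: select the
    raid bdev by name (the jq 'select' step) and compare its fields."""
    info = next(b for b in json.loads(raw) if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    return info

info = verify_raid_bdev_state(payload, "Existed_Raid", "offline",
                              "raid0", 64, 3)
print(info["num_base_bdevs_discovered"])  # prints 3
```

The `expected_state=offline` branch comes from `has_redundancy raid0` returning 1 in the trace: raid0 cannot survive the loss of a base bdev, so removing one deconfigures the array.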
00:13:20.759 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.759 [2024-12-06 18:11:46.143600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:20.760 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.760 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:20.760 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:20.760 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.760 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:20.760 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.760 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.760 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.018 [2024-12-06 18:11:46.289380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:21.018 18:11:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.018 [2024-12-06 18:11:46.431364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:21.018 [2024-12-06 18:11:46.431559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:21.018 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.278 BaseBdev2 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.278 [ 00:13:21.278 { 00:13:21.278 "name": "BaseBdev2", 00:13:21.278 "aliases": [ 00:13:21.278 "8a56acc4-0178-4244-8a53-eb794d0335bc" 00:13:21.278 ], 00:13:21.278 "product_name": "Malloc disk", 00:13:21.278 "block_size": 512, 00:13:21.278 "num_blocks": 65536, 00:13:21.278 "uuid": "8a56acc4-0178-4244-8a53-eb794d0335bc", 00:13:21.278 "assigned_rate_limits": { 00:13:21.278 "rw_ios_per_sec": 0, 00:13:21.278 "rw_mbytes_per_sec": 0, 00:13:21.278 "r_mbytes_per_sec": 0, 00:13:21.278 "w_mbytes_per_sec": 0 00:13:21.278 }, 00:13:21.278 "claimed": false, 00:13:21.278 "zoned": false, 00:13:21.278 "supported_io_types": { 00:13:21.278 "read": true, 00:13:21.278 "write": true, 00:13:21.278 "unmap": true, 00:13:21.278 "flush": true, 00:13:21.278 "reset": true, 00:13:21.278 "nvme_admin": false, 00:13:21.278 "nvme_io": false, 00:13:21.278 "nvme_io_md": false, 00:13:21.278 "write_zeroes": true, 00:13:21.278 "zcopy": true, 00:13:21.278 "get_zone_info": false, 00:13:21.278 "zone_management": false, 00:13:21.278 "zone_append": false, 00:13:21.278 "compare": false, 00:13:21.278 "compare_and_write": false, 00:13:21.278 "abort": true, 00:13:21.278 "seek_hole": false, 00:13:21.278 
"seek_data": false, 00:13:21.278 "copy": true, 00:13:21.278 "nvme_iov_md": false 00:13:21.278 }, 00:13:21.278 "memory_domains": [ 00:13:21.278 { 00:13:21.278 "dma_device_id": "system", 00:13:21.278 "dma_device_type": 1 00:13:21.278 }, 00:13:21.278 { 00:13:21.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.278 "dma_device_type": 2 00:13:21.278 } 00:13:21.278 ], 00:13:21.278 "driver_specific": {} 00:13:21.278 } 00:13:21.278 ] 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.278 BaseBdev3 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
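The earlier `verify_raid_bdev_properties` steps compare `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` between the raid volume and each base bdev, which is why the bash test matches against `512` followed by three spaces (`[[ 512 == \5\1\2\ \ \  ]]`): the Malloc bdevs carry no `md_size`, `md_interleave`, or `dif_type` keys, and jq's `join` renders those missing values as empty strings. A small Python sketch of that jq semantics (illustrative only, using a record trimmed from the Malloc disk dumps above):

```python
import json

# Record trimmed from the "Malloc disk" dumps above; md_size,
# md_interleave and dif_type are absent, exactly as in the log.
bdev = json.loads('{"name": "BaseBdev2", "block_size": 512, "num_blocks": 65536}')

def fmt_fields(b):
    """Mimic jq's '[.block_size, .md_size, .md_interleave, .dif_type] |
    join(" ")': missing keys act like null, which join renders as ''."""
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if b.get(k) is None else str(b[k]) for k in keys)

cmp_base_bdev = fmt_fields(bdev)
print(repr(cmp_base_bdev))  # prints '512   ' -> "512" plus three trailing spaces
```

Since every base bdev and the raid volume itself report `block_size: 512` with no metadata or DIF configuration, each `cmp_base_bdev` equals the `cmp_raid_bdev` string and the comparisons in the trace pass.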
00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.278 [ 00:13:21.278 { 00:13:21.278 "name": "BaseBdev3", 00:13:21.278 "aliases": [ 00:13:21.278 "2aa0d20c-8ac1-4c6e-b0a7-e5cbea9371c9" 00:13:21.278 ], 00:13:21.278 "product_name": "Malloc disk", 00:13:21.278 "block_size": 512, 00:13:21.278 "num_blocks": 65536, 00:13:21.278 "uuid": "2aa0d20c-8ac1-4c6e-b0a7-e5cbea9371c9", 00:13:21.278 "assigned_rate_limits": { 00:13:21.278 "rw_ios_per_sec": 0, 00:13:21.278 "rw_mbytes_per_sec": 0, 00:13:21.278 "r_mbytes_per_sec": 0, 00:13:21.278 "w_mbytes_per_sec": 0 00:13:21.278 }, 00:13:21.278 "claimed": false, 00:13:21.278 "zoned": false, 00:13:21.278 "supported_io_types": { 00:13:21.278 "read": true, 00:13:21.278 "write": true, 00:13:21.278 "unmap": true, 00:13:21.278 "flush": true, 00:13:21.278 "reset": true, 00:13:21.278 "nvme_admin": false, 00:13:21.278 "nvme_io": false, 00:13:21.278 "nvme_io_md": false, 00:13:21.278 "write_zeroes": true, 00:13:21.278 "zcopy": true, 00:13:21.278 "get_zone_info": false, 00:13:21.278 "zone_management": false, 00:13:21.278 "zone_append": false, 00:13:21.278 "compare": false, 00:13:21.278 "compare_and_write": false, 00:13:21.278 "abort": true, 00:13:21.278 "seek_hole": false, 00:13:21.278 "seek_data": false, 
00:13:21.278 "copy": true, 00:13:21.278 "nvme_iov_md": false 00:13:21.278 }, 00:13:21.278 "memory_domains": [ 00:13:21.278 { 00:13:21.278 "dma_device_id": "system", 00:13:21.278 "dma_device_type": 1 00:13:21.278 }, 00:13:21.278 { 00:13:21.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.278 "dma_device_type": 2 00:13:21.278 } 00:13:21.278 ], 00:13:21.278 "driver_specific": {} 00:13:21.278 } 00:13:21.278 ] 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:21.278 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.279 BaseBdev4 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:21.279 
18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.279 [ 00:13:21.279 { 00:13:21.279 "name": "BaseBdev4", 00:13:21.279 "aliases": [ 00:13:21.279 "0eba375f-276c-4042-afa4-b804741ea97b" 00:13:21.279 ], 00:13:21.279 "product_name": "Malloc disk", 00:13:21.279 "block_size": 512, 00:13:21.279 "num_blocks": 65536, 00:13:21.279 "uuid": "0eba375f-276c-4042-afa4-b804741ea97b", 00:13:21.279 "assigned_rate_limits": { 00:13:21.279 "rw_ios_per_sec": 0, 00:13:21.279 "rw_mbytes_per_sec": 0, 00:13:21.279 "r_mbytes_per_sec": 0, 00:13:21.279 "w_mbytes_per_sec": 0 00:13:21.279 }, 00:13:21.279 "claimed": false, 00:13:21.279 "zoned": false, 00:13:21.279 "supported_io_types": { 00:13:21.279 "read": true, 00:13:21.279 "write": true, 00:13:21.279 "unmap": true, 00:13:21.279 "flush": true, 00:13:21.279 "reset": true, 00:13:21.279 "nvme_admin": false, 00:13:21.279 "nvme_io": false, 00:13:21.279 "nvme_io_md": false, 00:13:21.279 "write_zeroes": true, 00:13:21.279 "zcopy": true, 00:13:21.279 "get_zone_info": false, 00:13:21.279 "zone_management": false, 00:13:21.279 "zone_append": false, 00:13:21.279 "compare": false, 00:13:21.279 "compare_and_write": false, 00:13:21.279 "abort": true, 00:13:21.279 "seek_hole": false, 00:13:21.279 "seek_data": false, 00:13:21.279 
"copy": true, 00:13:21.279 "nvme_iov_md": false 00:13:21.279 }, 00:13:21.279 "memory_domains": [ 00:13:21.279 { 00:13:21.279 "dma_device_id": "system", 00:13:21.279 "dma_device_type": 1 00:13:21.279 }, 00:13:21.279 { 00:13:21.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.279 "dma_device_type": 2 00:13:21.279 } 00:13:21.279 ], 00:13:21.279 "driver_specific": {} 00:13:21.279 } 00:13:21.279 ] 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.279 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.279 [2024-12-06 18:11:46.793059] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.279 [2024-12-06 18:11:46.793113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.279 [2024-12-06 18:11:46.793146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.279 [2024-12-06 18:11:46.795580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:21.279 [2024-12-06 18:11:46.795659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.537 18:11:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.537 "name": "Existed_Raid", 00:13:21.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.537 "strip_size_kb": 64, 00:13:21.537 "state": "configuring", 00:13:21.537 
"raid_level": "raid0", 00:13:21.537 "superblock": false, 00:13:21.537 "num_base_bdevs": 4, 00:13:21.537 "num_base_bdevs_discovered": 3, 00:13:21.537 "num_base_bdevs_operational": 4, 00:13:21.537 "base_bdevs_list": [ 00:13:21.537 { 00:13:21.537 "name": "BaseBdev1", 00:13:21.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.537 "is_configured": false, 00:13:21.537 "data_offset": 0, 00:13:21.537 "data_size": 0 00:13:21.537 }, 00:13:21.537 { 00:13:21.537 "name": "BaseBdev2", 00:13:21.537 "uuid": "8a56acc4-0178-4244-8a53-eb794d0335bc", 00:13:21.537 "is_configured": true, 00:13:21.537 "data_offset": 0, 00:13:21.537 "data_size": 65536 00:13:21.537 }, 00:13:21.537 { 00:13:21.537 "name": "BaseBdev3", 00:13:21.537 "uuid": "2aa0d20c-8ac1-4c6e-b0a7-e5cbea9371c9", 00:13:21.537 "is_configured": true, 00:13:21.537 "data_offset": 0, 00:13:21.537 "data_size": 65536 00:13:21.537 }, 00:13:21.537 { 00:13:21.537 "name": "BaseBdev4", 00:13:21.537 "uuid": "0eba375f-276c-4042-afa4-b804741ea97b", 00:13:21.537 "is_configured": true, 00:13:21.537 "data_offset": 0, 00:13:21.537 "data_size": 65536 00:13:21.537 } 00:13:21.537 ] 00:13:21.537 }' 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.537 18:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.795 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:21.795 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.795 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.795 [2024-12-06 18:11:47.313214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.053 "name": "Existed_Raid", 00:13:22.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.053 "strip_size_kb": 64, 00:13:22.053 "state": "configuring", 00:13:22.053 "raid_level": "raid0", 00:13:22.053 "superblock": false, 00:13:22.053 
"num_base_bdevs": 4, 00:13:22.053 "num_base_bdevs_discovered": 2, 00:13:22.053 "num_base_bdevs_operational": 4, 00:13:22.053 "base_bdevs_list": [ 00:13:22.053 { 00:13:22.053 "name": "BaseBdev1", 00:13:22.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.053 "is_configured": false, 00:13:22.053 "data_offset": 0, 00:13:22.053 "data_size": 0 00:13:22.053 }, 00:13:22.053 { 00:13:22.053 "name": null, 00:13:22.053 "uuid": "8a56acc4-0178-4244-8a53-eb794d0335bc", 00:13:22.053 "is_configured": false, 00:13:22.053 "data_offset": 0, 00:13:22.053 "data_size": 65536 00:13:22.053 }, 00:13:22.053 { 00:13:22.053 "name": "BaseBdev3", 00:13:22.053 "uuid": "2aa0d20c-8ac1-4c6e-b0a7-e5cbea9371c9", 00:13:22.053 "is_configured": true, 00:13:22.053 "data_offset": 0, 00:13:22.053 "data_size": 65536 00:13:22.053 }, 00:13:22.053 { 00:13:22.053 "name": "BaseBdev4", 00:13:22.053 "uuid": "0eba375f-276c-4042-afa4-b804741ea97b", 00:13:22.053 "is_configured": true, 00:13:22.053 "data_offset": 0, 00:13:22.053 "data_size": 65536 00:13:22.053 } 00:13:22.053 ] 00:13:22.053 }' 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.053 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:22.637 18:11:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.637 [2024-12-06 18:11:47.939730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.637 BaseBdev1 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.637 [ 00:13:22.637 { 00:13:22.637 "name": "BaseBdev1", 00:13:22.637 "aliases": [ 00:13:22.637 "53e380bf-8a51-44f8-a089-38ceb94a1603" 00:13:22.637 ], 00:13:22.637 "product_name": "Malloc disk", 00:13:22.637 "block_size": 512, 00:13:22.637 "num_blocks": 65536, 00:13:22.637 "uuid": "53e380bf-8a51-44f8-a089-38ceb94a1603", 00:13:22.637 "assigned_rate_limits": { 00:13:22.637 "rw_ios_per_sec": 0, 00:13:22.637 "rw_mbytes_per_sec": 0, 00:13:22.637 "r_mbytes_per_sec": 0, 00:13:22.637 "w_mbytes_per_sec": 0 00:13:22.637 }, 00:13:22.637 "claimed": true, 00:13:22.637 "claim_type": "exclusive_write", 00:13:22.637 "zoned": false, 00:13:22.637 "supported_io_types": { 00:13:22.637 "read": true, 00:13:22.637 "write": true, 00:13:22.637 "unmap": true, 00:13:22.637 "flush": true, 00:13:22.637 "reset": true, 00:13:22.637 "nvme_admin": false, 00:13:22.637 "nvme_io": false, 00:13:22.637 "nvme_io_md": false, 00:13:22.637 "write_zeroes": true, 00:13:22.637 "zcopy": true, 00:13:22.637 "get_zone_info": false, 00:13:22.637 "zone_management": false, 00:13:22.637 "zone_append": false, 00:13:22.637 "compare": false, 00:13:22.637 "compare_and_write": false, 00:13:22.637 "abort": true, 00:13:22.637 "seek_hole": false, 00:13:22.637 "seek_data": false, 00:13:22.637 "copy": true, 00:13:22.637 "nvme_iov_md": false 00:13:22.637 }, 00:13:22.637 "memory_domains": [ 00:13:22.637 { 00:13:22.637 "dma_device_id": "system", 00:13:22.637 "dma_device_type": 1 00:13:22.637 }, 00:13:22.637 { 00:13:22.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.637 "dma_device_type": 2 00:13:22.637 } 00:13:22.637 ], 00:13:22.637 "driver_specific": {} 00:13:22.637 } 00:13:22.637 ] 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.637 18:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.637 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.637 "name": "Existed_Raid", 00:13:22.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.637 "strip_size_kb": 64, 00:13:22.637 "state": "configuring", 00:13:22.637 "raid_level": "raid0", 00:13:22.637 "superblock": false, 
00:13:22.637 "num_base_bdevs": 4, 00:13:22.637 "num_base_bdevs_discovered": 3, 00:13:22.637 "num_base_bdevs_operational": 4, 00:13:22.637 "base_bdevs_list": [ 00:13:22.637 { 00:13:22.637 "name": "BaseBdev1", 00:13:22.637 "uuid": "53e380bf-8a51-44f8-a089-38ceb94a1603", 00:13:22.637 "is_configured": true, 00:13:22.637 "data_offset": 0, 00:13:22.637 "data_size": 65536 00:13:22.637 }, 00:13:22.637 { 00:13:22.637 "name": null, 00:13:22.637 "uuid": "8a56acc4-0178-4244-8a53-eb794d0335bc", 00:13:22.637 "is_configured": false, 00:13:22.637 "data_offset": 0, 00:13:22.637 "data_size": 65536 00:13:22.637 }, 00:13:22.637 { 00:13:22.637 "name": "BaseBdev3", 00:13:22.637 "uuid": "2aa0d20c-8ac1-4c6e-b0a7-e5cbea9371c9", 00:13:22.637 "is_configured": true, 00:13:22.637 "data_offset": 0, 00:13:22.637 "data_size": 65536 00:13:22.637 }, 00:13:22.637 { 00:13:22.637 "name": "BaseBdev4", 00:13:22.637 "uuid": "0eba375f-276c-4042-afa4-b804741ea97b", 00:13:22.637 "is_configured": true, 00:13:22.637 "data_offset": 0, 00:13:22.637 "data_size": 65536 00:13:22.637 } 00:13:22.637 ] 00:13:22.637 }' 00:13:22.637 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.637 18:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:23.203 18:11:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.203 [2024-12-06 18:11:48.532028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.203 "name": "Existed_Raid", 00:13:23.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.203 "strip_size_kb": 64, 00:13:23.203 "state": "configuring", 00:13:23.203 "raid_level": "raid0", 00:13:23.203 "superblock": false, 00:13:23.203 "num_base_bdevs": 4, 00:13:23.203 "num_base_bdevs_discovered": 2, 00:13:23.203 "num_base_bdevs_operational": 4, 00:13:23.203 "base_bdevs_list": [ 00:13:23.203 { 00:13:23.203 "name": "BaseBdev1", 00:13:23.203 "uuid": "53e380bf-8a51-44f8-a089-38ceb94a1603", 00:13:23.203 "is_configured": true, 00:13:23.203 "data_offset": 0, 00:13:23.203 "data_size": 65536 00:13:23.203 }, 00:13:23.203 { 00:13:23.203 "name": null, 00:13:23.203 "uuid": "8a56acc4-0178-4244-8a53-eb794d0335bc", 00:13:23.203 "is_configured": false, 00:13:23.203 "data_offset": 0, 00:13:23.203 "data_size": 65536 00:13:23.203 }, 00:13:23.203 { 00:13:23.203 "name": null, 00:13:23.203 "uuid": "2aa0d20c-8ac1-4c6e-b0a7-e5cbea9371c9", 00:13:23.203 "is_configured": false, 00:13:23.203 "data_offset": 0, 00:13:23.203 "data_size": 65536 00:13:23.203 }, 00:13:23.203 { 00:13:23.203 "name": "BaseBdev4", 00:13:23.203 "uuid": "0eba375f-276c-4042-afa4-b804741ea97b", 00:13:23.203 "is_configured": true, 00:13:23.203 "data_offset": 0, 00:13:23.203 "data_size": 65536 00:13:23.203 } 00:13:23.203 ] 00:13:23.203 }' 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.203 18:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.768 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:13:23.768 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.768 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.768 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.768 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.768 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:23.768 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:23.768 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.768 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.768 [2024-12-06 18:11:49.100194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.768 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.768 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:23.768 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.768 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.768 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:23.769 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.769 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.769 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:23.769 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.769 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.769 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.769 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.769 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.769 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.769 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.769 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.769 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.769 "name": "Existed_Raid", 00:13:23.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.769 "strip_size_kb": 64, 00:13:23.769 "state": "configuring", 00:13:23.769 "raid_level": "raid0", 00:13:23.769 "superblock": false, 00:13:23.769 "num_base_bdevs": 4, 00:13:23.769 "num_base_bdevs_discovered": 3, 00:13:23.769 "num_base_bdevs_operational": 4, 00:13:23.769 "base_bdevs_list": [ 00:13:23.769 { 00:13:23.769 "name": "BaseBdev1", 00:13:23.769 "uuid": "53e380bf-8a51-44f8-a089-38ceb94a1603", 00:13:23.769 "is_configured": true, 00:13:23.769 "data_offset": 0, 00:13:23.769 "data_size": 65536 00:13:23.769 }, 00:13:23.769 { 00:13:23.769 "name": null, 00:13:23.769 "uuid": "8a56acc4-0178-4244-8a53-eb794d0335bc", 00:13:23.769 "is_configured": false, 00:13:23.769 "data_offset": 0, 00:13:23.769 "data_size": 65536 00:13:23.769 }, 00:13:23.769 { 00:13:23.769 "name": "BaseBdev3", 00:13:23.769 "uuid": "2aa0d20c-8ac1-4c6e-b0a7-e5cbea9371c9", 00:13:23.769 "is_configured": 
true, 00:13:23.769 "data_offset": 0, 00:13:23.769 "data_size": 65536 00:13:23.769 }, 00:13:23.769 { 00:13:23.769 "name": "BaseBdev4", 00:13:23.769 "uuid": "0eba375f-276c-4042-afa4-b804741ea97b", 00:13:23.769 "is_configured": true, 00:13:23.769 "data_offset": 0, 00:13:23.769 "data_size": 65536 00:13:23.769 } 00:13:23.769 ] 00:13:23.769 }' 00:13:23.769 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.769 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.353 [2024-12-06 18:11:49.660419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.353 "name": "Existed_Raid", 00:13:24.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.353 "strip_size_kb": 64, 00:13:24.353 "state": "configuring", 00:13:24.353 "raid_level": "raid0", 00:13:24.353 "superblock": false, 00:13:24.353 "num_base_bdevs": 4, 00:13:24.353 "num_base_bdevs_discovered": 2, 00:13:24.353 "num_base_bdevs_operational": 4, 00:13:24.353 
"base_bdevs_list": [ 00:13:24.353 { 00:13:24.353 "name": null, 00:13:24.353 "uuid": "53e380bf-8a51-44f8-a089-38ceb94a1603", 00:13:24.353 "is_configured": false, 00:13:24.353 "data_offset": 0, 00:13:24.353 "data_size": 65536 00:13:24.353 }, 00:13:24.353 { 00:13:24.353 "name": null, 00:13:24.353 "uuid": "8a56acc4-0178-4244-8a53-eb794d0335bc", 00:13:24.353 "is_configured": false, 00:13:24.353 "data_offset": 0, 00:13:24.353 "data_size": 65536 00:13:24.353 }, 00:13:24.353 { 00:13:24.353 "name": "BaseBdev3", 00:13:24.353 "uuid": "2aa0d20c-8ac1-4c6e-b0a7-e5cbea9371c9", 00:13:24.353 "is_configured": true, 00:13:24.353 "data_offset": 0, 00:13:24.353 "data_size": 65536 00:13:24.353 }, 00:13:24.353 { 00:13:24.353 "name": "BaseBdev4", 00:13:24.353 "uuid": "0eba375f-276c-4042-afa4-b804741ea97b", 00:13:24.353 "is_configured": true, 00:13:24.353 "data_offset": 0, 00:13:24.353 "data_size": 65536 00:13:24.353 } 00:13:24.353 ] 00:13:24.353 }' 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.353 18:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:24.920 18:11:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.920 [2024-12-06 18:11:50.315421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.920 "name": "Existed_Raid", 00:13:24.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.920 "strip_size_kb": 64, 00:13:24.920 "state": "configuring", 00:13:24.920 "raid_level": "raid0", 00:13:24.920 "superblock": false, 00:13:24.920 "num_base_bdevs": 4, 00:13:24.920 "num_base_bdevs_discovered": 3, 00:13:24.920 "num_base_bdevs_operational": 4, 00:13:24.920 "base_bdevs_list": [ 00:13:24.920 { 00:13:24.920 "name": null, 00:13:24.920 "uuid": "53e380bf-8a51-44f8-a089-38ceb94a1603", 00:13:24.920 "is_configured": false, 00:13:24.920 "data_offset": 0, 00:13:24.920 "data_size": 65536 00:13:24.920 }, 00:13:24.920 { 00:13:24.920 "name": "BaseBdev2", 00:13:24.920 "uuid": "8a56acc4-0178-4244-8a53-eb794d0335bc", 00:13:24.920 "is_configured": true, 00:13:24.920 "data_offset": 0, 00:13:24.920 "data_size": 65536 00:13:24.920 }, 00:13:24.920 { 00:13:24.920 "name": "BaseBdev3", 00:13:24.920 "uuid": "2aa0d20c-8ac1-4c6e-b0a7-e5cbea9371c9", 00:13:24.920 "is_configured": true, 00:13:24.920 "data_offset": 0, 00:13:24.920 "data_size": 65536 00:13:24.920 }, 00:13:24.920 { 00:13:24.920 "name": "BaseBdev4", 00:13:24.920 "uuid": "0eba375f-276c-4042-afa4-b804741ea97b", 00:13:24.920 "is_configured": true, 00:13:24.920 "data_offset": 0, 00:13:24.920 "data_size": 65536 00:13:24.920 } 00:13:24.920 ] 00:13:24.920 }' 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.920 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.488 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.489 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:13:25.489 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.489 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.489 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.489 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:25.489 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.489 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:25.489 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.489 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.489 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.489 18:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 53e380bf-8a51-44f8-a089-38ceb94a1603 00:13:25.489 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.489 18:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.748 [2024-12-06 18:11:51.019633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:25.748 [2024-12-06 18:11:51.019716] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:25.748 [2024-12-06 18:11:51.019728] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:25.749 [2024-12-06 18:11:51.020090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:25.749 [2024-12-06 18:11:51.020293] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:25.749 [2024-12-06 18:11:51.020323] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:25.749 [2024-12-06 18:11:51.020593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.749 NewBaseBdev 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.749 [ 00:13:25.749 { 
00:13:25.749 "name": "NewBaseBdev", 00:13:25.749 "aliases": [ 00:13:25.749 "53e380bf-8a51-44f8-a089-38ceb94a1603" 00:13:25.749 ], 00:13:25.749 "product_name": "Malloc disk", 00:13:25.749 "block_size": 512, 00:13:25.749 "num_blocks": 65536, 00:13:25.749 "uuid": "53e380bf-8a51-44f8-a089-38ceb94a1603", 00:13:25.749 "assigned_rate_limits": { 00:13:25.749 "rw_ios_per_sec": 0, 00:13:25.749 "rw_mbytes_per_sec": 0, 00:13:25.749 "r_mbytes_per_sec": 0, 00:13:25.749 "w_mbytes_per_sec": 0 00:13:25.749 }, 00:13:25.749 "claimed": true, 00:13:25.749 "claim_type": "exclusive_write", 00:13:25.749 "zoned": false, 00:13:25.749 "supported_io_types": { 00:13:25.749 "read": true, 00:13:25.749 "write": true, 00:13:25.749 "unmap": true, 00:13:25.749 "flush": true, 00:13:25.749 "reset": true, 00:13:25.749 "nvme_admin": false, 00:13:25.749 "nvme_io": false, 00:13:25.749 "nvme_io_md": false, 00:13:25.749 "write_zeroes": true, 00:13:25.749 "zcopy": true, 00:13:25.749 "get_zone_info": false, 00:13:25.749 "zone_management": false, 00:13:25.749 "zone_append": false, 00:13:25.749 "compare": false, 00:13:25.749 "compare_and_write": false, 00:13:25.749 "abort": true, 00:13:25.749 "seek_hole": false, 00:13:25.749 "seek_data": false, 00:13:25.749 "copy": true, 00:13:25.749 "nvme_iov_md": false 00:13:25.749 }, 00:13:25.749 "memory_domains": [ 00:13:25.749 { 00:13:25.749 "dma_device_id": "system", 00:13:25.749 "dma_device_type": 1 00:13:25.749 }, 00:13:25.749 { 00:13:25.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.749 "dma_device_type": 2 00:13:25.749 } 00:13:25.749 ], 00:13:25.749 "driver_specific": {} 00:13:25.749 } 00:13:25.749 ] 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:25.749 
18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.749 "name": "Existed_Raid", 00:13:25.749 "uuid": "ec906ade-6f65-4afd-9eb9-9da61e841101", 00:13:25.749 "strip_size_kb": 64, 00:13:25.749 "state": "online", 00:13:25.749 "raid_level": "raid0", 00:13:25.749 "superblock": false, 00:13:25.749 "num_base_bdevs": 4, 00:13:25.749 "num_base_bdevs_discovered": 4, 00:13:25.749 
"num_base_bdevs_operational": 4, 00:13:25.749 "base_bdevs_list": [ 00:13:25.749 { 00:13:25.749 "name": "NewBaseBdev", 00:13:25.749 "uuid": "53e380bf-8a51-44f8-a089-38ceb94a1603", 00:13:25.749 "is_configured": true, 00:13:25.749 "data_offset": 0, 00:13:25.749 "data_size": 65536 00:13:25.749 }, 00:13:25.749 { 00:13:25.749 "name": "BaseBdev2", 00:13:25.749 "uuid": "8a56acc4-0178-4244-8a53-eb794d0335bc", 00:13:25.749 "is_configured": true, 00:13:25.749 "data_offset": 0, 00:13:25.749 "data_size": 65536 00:13:25.749 }, 00:13:25.749 { 00:13:25.749 "name": "BaseBdev3", 00:13:25.749 "uuid": "2aa0d20c-8ac1-4c6e-b0a7-e5cbea9371c9", 00:13:25.749 "is_configured": true, 00:13:25.749 "data_offset": 0, 00:13:25.749 "data_size": 65536 00:13:25.749 }, 00:13:25.749 { 00:13:25.749 "name": "BaseBdev4", 00:13:25.749 "uuid": "0eba375f-276c-4042-afa4-b804741ea97b", 00:13:25.749 "is_configured": true, 00:13:25.749 "data_offset": 0, 00:13:25.749 "data_size": 65536 00:13:25.749 } 00:13:25.749 ] 00:13:25.749 }' 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.749 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.317 [2024-12-06 18:11:51.568422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:26.317 "name": "Existed_Raid", 00:13:26.317 "aliases": [ 00:13:26.317 "ec906ade-6f65-4afd-9eb9-9da61e841101" 00:13:26.317 ], 00:13:26.317 "product_name": "Raid Volume", 00:13:26.317 "block_size": 512, 00:13:26.317 "num_blocks": 262144, 00:13:26.317 "uuid": "ec906ade-6f65-4afd-9eb9-9da61e841101", 00:13:26.317 "assigned_rate_limits": { 00:13:26.317 "rw_ios_per_sec": 0, 00:13:26.317 "rw_mbytes_per_sec": 0, 00:13:26.317 "r_mbytes_per_sec": 0, 00:13:26.317 "w_mbytes_per_sec": 0 00:13:26.317 }, 00:13:26.317 "claimed": false, 00:13:26.317 "zoned": false, 00:13:26.317 "supported_io_types": { 00:13:26.317 "read": true, 00:13:26.317 "write": true, 00:13:26.317 "unmap": true, 00:13:26.317 "flush": true, 00:13:26.317 "reset": true, 00:13:26.317 "nvme_admin": false, 00:13:26.317 "nvme_io": false, 00:13:26.317 "nvme_io_md": false, 00:13:26.317 "write_zeroes": true, 00:13:26.317 "zcopy": false, 00:13:26.317 "get_zone_info": false, 00:13:26.317 "zone_management": false, 00:13:26.317 "zone_append": false, 00:13:26.317 "compare": false, 00:13:26.317 "compare_and_write": false, 00:13:26.317 "abort": false, 00:13:26.317 "seek_hole": false, 00:13:26.317 "seek_data": false, 00:13:26.317 "copy": false, 00:13:26.317 "nvme_iov_md": false 00:13:26.317 }, 00:13:26.317 "memory_domains": [ 00:13:26.317 { 00:13:26.317 "dma_device_id": "system", 
00:13:26.317 "dma_device_type": 1 00:13:26.317 }, 00:13:26.317 { 00:13:26.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.317 "dma_device_type": 2 00:13:26.317 }, 00:13:26.317 { 00:13:26.317 "dma_device_id": "system", 00:13:26.317 "dma_device_type": 1 00:13:26.317 }, 00:13:26.317 { 00:13:26.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.317 "dma_device_type": 2 00:13:26.317 }, 00:13:26.317 { 00:13:26.317 "dma_device_id": "system", 00:13:26.317 "dma_device_type": 1 00:13:26.317 }, 00:13:26.317 { 00:13:26.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.317 "dma_device_type": 2 00:13:26.317 }, 00:13:26.317 { 00:13:26.317 "dma_device_id": "system", 00:13:26.317 "dma_device_type": 1 00:13:26.317 }, 00:13:26.317 { 00:13:26.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.317 "dma_device_type": 2 00:13:26.317 } 00:13:26.317 ], 00:13:26.317 "driver_specific": { 00:13:26.317 "raid": { 00:13:26.317 "uuid": "ec906ade-6f65-4afd-9eb9-9da61e841101", 00:13:26.317 "strip_size_kb": 64, 00:13:26.317 "state": "online", 00:13:26.317 "raid_level": "raid0", 00:13:26.317 "superblock": false, 00:13:26.317 "num_base_bdevs": 4, 00:13:26.317 "num_base_bdevs_discovered": 4, 00:13:26.317 "num_base_bdevs_operational": 4, 00:13:26.317 "base_bdevs_list": [ 00:13:26.317 { 00:13:26.317 "name": "NewBaseBdev", 00:13:26.317 "uuid": "53e380bf-8a51-44f8-a089-38ceb94a1603", 00:13:26.317 "is_configured": true, 00:13:26.317 "data_offset": 0, 00:13:26.317 "data_size": 65536 00:13:26.317 }, 00:13:26.317 { 00:13:26.317 "name": "BaseBdev2", 00:13:26.317 "uuid": "8a56acc4-0178-4244-8a53-eb794d0335bc", 00:13:26.317 "is_configured": true, 00:13:26.317 "data_offset": 0, 00:13:26.317 "data_size": 65536 00:13:26.317 }, 00:13:26.317 { 00:13:26.317 "name": "BaseBdev3", 00:13:26.317 "uuid": "2aa0d20c-8ac1-4c6e-b0a7-e5cbea9371c9", 00:13:26.317 "is_configured": true, 00:13:26.317 "data_offset": 0, 00:13:26.317 "data_size": 65536 00:13:26.317 }, 00:13:26.317 { 00:13:26.317 "name": "BaseBdev4", 
00:13:26.317 "uuid": "0eba375f-276c-4042-afa4-b804741ea97b", 00:13:26.317 "is_configured": true, 00:13:26.317 "data_offset": 0, 00:13:26.317 "data_size": 65536 00:13:26.317 } 00:13:26.317 ] 00:13:26.317 } 00:13:26.317 } 00:13:26.317 }' 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:26.317 BaseBdev2 00:13:26.317 BaseBdev3 00:13:26.317 BaseBdev4' 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.317 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:26.576 18:11:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.576 [2024-12-06 18:11:51.936014] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:26.576 [2024-12-06 18:11:51.936054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:26.576 [2024-12-06 18:11:51.936169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.576 [2024-12-06 18:11:51.936332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.576 [2024-12-06 18:11:51.936348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69518 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69518 ']' 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69518 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69518 00:13:26.576 killing process with pid 69518 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69518' 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69518 00:13:26.576 [2024-12-06 18:11:51.972770] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:26.576 18:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69518 00:13:26.834 [2024-12-06 18:11:52.297463] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.248 ************************************ 00:13:28.248 END TEST raid_state_function_test 00:13:28.248 ************************************ 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:28.248 00:13:28.248 real 0m12.807s 00:13:28.248 user 0m21.321s 00:13:28.248 sys 0m1.781s 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.248 18:11:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:13:28.248 18:11:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:28.248 18:11:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.248 18:11:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.248 ************************************ 00:13:28.248 START TEST raid_state_function_test_sb 00:13:28.248 ************************************ 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:28.248 18:11:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70196 00:13:28.248 Process raid pid: 70196 00:13:28.248 18:11:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70196' 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70196 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70196 ']' 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.248 18:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.248 [2024-12-06 18:11:53.486955] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:13:28.248 [2024-12-06 18:11:53.487105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.248 [2024-12-06 18:11:53.662703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.508 [2024-12-06 18:11:53.796085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.508 [2024-12-06 18:11:54.011487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.508 [2024-12-06 18:11:54.011573] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.076 [2024-12-06 18:11:54.528573] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:29.076 [2024-12-06 18:11:54.528641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:29.076 [2024-12-06 18:11:54.528664] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:29.076 [2024-12-06 18:11:54.528681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:29.076 [2024-12-06 18:11:54.528691] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:13:29.076 [2024-12-06 18:11:54.528705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:29.076 [2024-12-06 18:11:54.528715] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:29.076 [2024-12-06 18:11:54.528729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.076 18:11:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.076 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.076 "name": "Existed_Raid", 00:13:29.076 "uuid": "8ed102d2-568d-4380-87e0-b97fa65f6b3b", 00:13:29.076 "strip_size_kb": 64, 00:13:29.076 "state": "configuring", 00:13:29.076 "raid_level": "raid0", 00:13:29.076 "superblock": true, 00:13:29.076 "num_base_bdevs": 4, 00:13:29.076 "num_base_bdevs_discovered": 0, 00:13:29.076 "num_base_bdevs_operational": 4, 00:13:29.077 "base_bdevs_list": [ 00:13:29.077 { 00:13:29.077 "name": "BaseBdev1", 00:13:29.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.077 "is_configured": false, 00:13:29.077 "data_offset": 0, 00:13:29.077 "data_size": 0 00:13:29.077 }, 00:13:29.077 { 00:13:29.077 "name": "BaseBdev2", 00:13:29.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.077 "is_configured": false, 00:13:29.077 "data_offset": 0, 00:13:29.077 "data_size": 0 00:13:29.077 }, 00:13:29.077 { 00:13:29.077 "name": "BaseBdev3", 00:13:29.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.077 "is_configured": false, 00:13:29.077 "data_offset": 0, 00:13:29.077 "data_size": 0 00:13:29.077 }, 00:13:29.077 { 00:13:29.077 "name": "BaseBdev4", 00:13:29.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.077 "is_configured": false, 00:13:29.077 "data_offset": 0, 00:13:29.077 "data_size": 0 00:13:29.077 } 00:13:29.077 ] 00:13:29.077 }' 00:13:29.077 18:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.077 18:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.645 18:11:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.645 [2024-12-06 18:11:55.068747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:29.645 [2024-12-06 18:11:55.068811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.645 [2024-12-06 18:11:55.076725] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:29.645 [2024-12-06 18:11:55.076825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:29.645 [2024-12-06 18:11:55.076841] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:29.645 [2024-12-06 18:11:55.076858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:29.645 [2024-12-06 18:11:55.076868] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:29.645 [2024-12-06 18:11:55.076882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:29.645 [2024-12-06 18:11:55.076892] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:13:29.645 [2024-12-06 18:11:55.076915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.645 [2024-12-06 18:11:55.121880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.645 BaseBdev1 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.645 [ 00:13:29.645 { 00:13:29.645 "name": "BaseBdev1", 00:13:29.645 "aliases": [ 00:13:29.645 "25bda460-dcfa-437f-994a-a244076edf8d" 00:13:29.645 ], 00:13:29.645 "product_name": "Malloc disk", 00:13:29.645 "block_size": 512, 00:13:29.645 "num_blocks": 65536, 00:13:29.645 "uuid": "25bda460-dcfa-437f-994a-a244076edf8d", 00:13:29.645 "assigned_rate_limits": { 00:13:29.645 "rw_ios_per_sec": 0, 00:13:29.645 "rw_mbytes_per_sec": 0, 00:13:29.645 "r_mbytes_per_sec": 0, 00:13:29.645 "w_mbytes_per_sec": 0 00:13:29.645 }, 00:13:29.645 "claimed": true, 00:13:29.645 "claim_type": "exclusive_write", 00:13:29.645 "zoned": false, 00:13:29.645 "supported_io_types": { 00:13:29.645 "read": true, 00:13:29.645 "write": true, 00:13:29.645 "unmap": true, 00:13:29.645 "flush": true, 00:13:29.645 "reset": true, 00:13:29.645 "nvme_admin": false, 00:13:29.645 "nvme_io": false, 00:13:29.645 "nvme_io_md": false, 00:13:29.645 "write_zeroes": true, 00:13:29.645 "zcopy": true, 00:13:29.645 "get_zone_info": false, 00:13:29.645 "zone_management": false, 00:13:29.645 "zone_append": false, 00:13:29.645 "compare": false, 00:13:29.645 "compare_and_write": false, 00:13:29.645 "abort": true, 00:13:29.645 "seek_hole": false, 00:13:29.645 "seek_data": false, 00:13:29.645 "copy": true, 00:13:29.645 "nvme_iov_md": false 00:13:29.645 }, 00:13:29.645 "memory_domains": [ 00:13:29.645 { 00:13:29.645 "dma_device_id": "system", 00:13:29.645 "dma_device_type": 1 00:13:29.645 }, 00:13:29.645 { 00:13:29.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.645 "dma_device_type": 2 00:13:29.645 } 
00:13:29.645 ], 00:13:29.645 "driver_specific": {} 00:13:29.645 } 00:13:29.645 ] 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.645 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.904 18:11:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.904 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.904 "name": "Existed_Raid", 00:13:29.904 "uuid": "08de6276-4365-4334-9b00-aec346354efe", 00:13:29.904 "strip_size_kb": 64, 00:13:29.904 "state": "configuring", 00:13:29.904 "raid_level": "raid0", 00:13:29.904 "superblock": true, 00:13:29.904 "num_base_bdevs": 4, 00:13:29.904 "num_base_bdevs_discovered": 1, 00:13:29.904 "num_base_bdevs_operational": 4, 00:13:29.904 "base_bdevs_list": [ 00:13:29.904 { 00:13:29.904 "name": "BaseBdev1", 00:13:29.904 "uuid": "25bda460-dcfa-437f-994a-a244076edf8d", 00:13:29.904 "is_configured": true, 00:13:29.904 "data_offset": 2048, 00:13:29.904 "data_size": 63488 00:13:29.904 }, 00:13:29.904 { 00:13:29.904 "name": "BaseBdev2", 00:13:29.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.904 "is_configured": false, 00:13:29.904 "data_offset": 0, 00:13:29.904 "data_size": 0 00:13:29.904 }, 00:13:29.904 { 00:13:29.904 "name": "BaseBdev3", 00:13:29.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.904 "is_configured": false, 00:13:29.904 "data_offset": 0, 00:13:29.904 "data_size": 0 00:13:29.904 }, 00:13:29.904 { 00:13:29.904 "name": "BaseBdev4", 00:13:29.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.904 "is_configured": false, 00:13:29.904 "data_offset": 0, 00:13:29.904 "data_size": 0 00:13:29.904 } 00:13:29.904 ] 00:13:29.904 }' 00:13:29.904 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.904 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.471 18:11:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.471 [2024-12-06 18:11:55.694100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:30.471 [2024-12-06 18:11:55.694185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.471 [2024-12-06 18:11:55.702157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.471 [2024-12-06 18:11:55.704664] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:30.471 [2024-12-06 18:11:55.704733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:30.471 [2024-12-06 18:11:55.704749] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:30.471 [2024-12-06 18:11:55.704781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:30.471 [2024-12-06 18:11:55.704794] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:30.471 [2024-12-06 18:11:55.704809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.471 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.472 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:30.472 "name": "Existed_Raid", 00:13:30.472 "uuid": "9b53837d-b78f-4f8e-9f58-48ae283b7cc6", 00:13:30.472 "strip_size_kb": 64, 00:13:30.472 "state": "configuring", 00:13:30.472 "raid_level": "raid0", 00:13:30.472 "superblock": true, 00:13:30.472 "num_base_bdevs": 4, 00:13:30.472 "num_base_bdevs_discovered": 1, 00:13:30.472 "num_base_bdevs_operational": 4, 00:13:30.472 "base_bdevs_list": [ 00:13:30.472 { 00:13:30.472 "name": "BaseBdev1", 00:13:30.472 "uuid": "25bda460-dcfa-437f-994a-a244076edf8d", 00:13:30.472 "is_configured": true, 00:13:30.472 "data_offset": 2048, 00:13:30.472 "data_size": 63488 00:13:30.472 }, 00:13:30.472 { 00:13:30.472 "name": "BaseBdev2", 00:13:30.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.472 "is_configured": false, 00:13:30.472 "data_offset": 0, 00:13:30.472 "data_size": 0 00:13:30.472 }, 00:13:30.472 { 00:13:30.472 "name": "BaseBdev3", 00:13:30.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.472 "is_configured": false, 00:13:30.472 "data_offset": 0, 00:13:30.472 "data_size": 0 00:13:30.472 }, 00:13:30.472 { 00:13:30.472 "name": "BaseBdev4", 00:13:30.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.472 "is_configured": false, 00:13:30.472 "data_offset": 0, 00:13:30.472 "data_size": 0 00:13:30.472 } 00:13:30.472 ] 00:13:30.472 }' 00:13:30.472 18:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.472 18:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.042 [2024-12-06 18:11:56.301632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:31.042 BaseBdev2 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.042 [ 00:13:31.042 { 00:13:31.042 "name": "BaseBdev2", 00:13:31.042 "aliases": [ 00:13:31.042 "1c3bdceb-753d-4300-879d-43ea9ffaad08" 00:13:31.042 ], 00:13:31.042 "product_name": "Malloc disk", 00:13:31.042 "block_size": 512, 00:13:31.042 "num_blocks": 65536, 00:13:31.042 "uuid": "1c3bdceb-753d-4300-879d-43ea9ffaad08", 
00:13:31.042 "assigned_rate_limits": { 00:13:31.042 "rw_ios_per_sec": 0, 00:13:31.042 "rw_mbytes_per_sec": 0, 00:13:31.042 "r_mbytes_per_sec": 0, 00:13:31.042 "w_mbytes_per_sec": 0 00:13:31.042 }, 00:13:31.042 "claimed": true, 00:13:31.042 "claim_type": "exclusive_write", 00:13:31.042 "zoned": false, 00:13:31.042 "supported_io_types": { 00:13:31.042 "read": true, 00:13:31.042 "write": true, 00:13:31.042 "unmap": true, 00:13:31.042 "flush": true, 00:13:31.042 "reset": true, 00:13:31.042 "nvme_admin": false, 00:13:31.042 "nvme_io": false, 00:13:31.042 "nvme_io_md": false, 00:13:31.042 "write_zeroes": true, 00:13:31.042 "zcopy": true, 00:13:31.042 "get_zone_info": false, 00:13:31.042 "zone_management": false, 00:13:31.042 "zone_append": false, 00:13:31.042 "compare": false, 00:13:31.042 "compare_and_write": false, 00:13:31.042 "abort": true, 00:13:31.042 "seek_hole": false, 00:13:31.042 "seek_data": false, 00:13:31.042 "copy": true, 00:13:31.042 "nvme_iov_md": false 00:13:31.042 }, 00:13:31.042 "memory_domains": [ 00:13:31.042 { 00:13:31.042 "dma_device_id": "system", 00:13:31.042 "dma_device_type": 1 00:13:31.042 }, 00:13:31.042 { 00:13:31.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.042 "dma_device_type": 2 00:13:31.042 } 00:13:31.042 ], 00:13:31.042 "driver_specific": {} 00:13:31.042 } 00:13:31.042 ] 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.042 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.043 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.043 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.043 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.043 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.043 "name": "Existed_Raid", 00:13:31.043 "uuid": "9b53837d-b78f-4f8e-9f58-48ae283b7cc6", 00:13:31.043 "strip_size_kb": 64, 00:13:31.043 "state": "configuring", 00:13:31.043 "raid_level": "raid0", 00:13:31.043 "superblock": true, 00:13:31.043 "num_base_bdevs": 4, 00:13:31.043 "num_base_bdevs_discovered": 2, 00:13:31.043 
"num_base_bdevs_operational": 4, 00:13:31.043 "base_bdevs_list": [ 00:13:31.043 { 00:13:31.043 "name": "BaseBdev1", 00:13:31.043 "uuid": "25bda460-dcfa-437f-994a-a244076edf8d", 00:13:31.043 "is_configured": true, 00:13:31.043 "data_offset": 2048, 00:13:31.043 "data_size": 63488 00:13:31.043 }, 00:13:31.043 { 00:13:31.043 "name": "BaseBdev2", 00:13:31.043 "uuid": "1c3bdceb-753d-4300-879d-43ea9ffaad08", 00:13:31.043 "is_configured": true, 00:13:31.043 "data_offset": 2048, 00:13:31.043 "data_size": 63488 00:13:31.043 }, 00:13:31.043 { 00:13:31.043 "name": "BaseBdev3", 00:13:31.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.043 "is_configured": false, 00:13:31.043 "data_offset": 0, 00:13:31.043 "data_size": 0 00:13:31.043 }, 00:13:31.043 { 00:13:31.043 "name": "BaseBdev4", 00:13:31.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.043 "is_configured": false, 00:13:31.043 "data_offset": 0, 00:13:31.043 "data_size": 0 00:13:31.043 } 00:13:31.043 ] 00:13:31.043 }' 00:13:31.043 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.043 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.612 [2024-12-06 18:11:56.912282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:31.612 BaseBdev3 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.612 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.612 [ 00:13:31.612 { 00:13:31.612 "name": "BaseBdev3", 00:13:31.612 "aliases": [ 00:13:31.612 "09281549-c717-4c65-9dfc-4d79bccbb2b9" 00:13:31.612 ], 00:13:31.612 "product_name": "Malloc disk", 00:13:31.612 "block_size": 512, 00:13:31.612 "num_blocks": 65536, 00:13:31.612 "uuid": "09281549-c717-4c65-9dfc-4d79bccbb2b9", 00:13:31.612 "assigned_rate_limits": { 00:13:31.612 "rw_ios_per_sec": 0, 00:13:31.612 "rw_mbytes_per_sec": 0, 00:13:31.612 "r_mbytes_per_sec": 0, 00:13:31.612 "w_mbytes_per_sec": 0 00:13:31.612 }, 00:13:31.612 "claimed": true, 00:13:31.612 "claim_type": "exclusive_write", 00:13:31.612 "zoned": false, 00:13:31.612 "supported_io_types": { 
00:13:31.612 "read": true, 00:13:31.612 "write": true, 00:13:31.612 "unmap": true, 00:13:31.612 "flush": true, 00:13:31.612 "reset": true, 00:13:31.612 "nvme_admin": false, 00:13:31.612 "nvme_io": false, 00:13:31.612 "nvme_io_md": false, 00:13:31.612 "write_zeroes": true, 00:13:31.612 "zcopy": true, 00:13:31.612 "get_zone_info": false, 00:13:31.612 "zone_management": false, 00:13:31.612 "zone_append": false, 00:13:31.612 "compare": false, 00:13:31.612 "compare_and_write": false, 00:13:31.612 "abort": true, 00:13:31.612 "seek_hole": false, 00:13:31.612 "seek_data": false, 00:13:31.612 "copy": true, 00:13:31.612 "nvme_iov_md": false 00:13:31.612 }, 00:13:31.612 "memory_domains": [ 00:13:31.612 { 00:13:31.612 "dma_device_id": "system", 00:13:31.612 "dma_device_type": 1 00:13:31.612 }, 00:13:31.612 { 00:13:31.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.613 "dma_device_type": 2 00:13:31.613 } 00:13:31.613 ], 00:13:31.613 "driver_specific": {} 00:13:31.613 } 00:13:31.613 ] 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.613 "name": "Existed_Raid", 00:13:31.613 "uuid": "9b53837d-b78f-4f8e-9f58-48ae283b7cc6", 00:13:31.613 "strip_size_kb": 64, 00:13:31.613 "state": "configuring", 00:13:31.613 "raid_level": "raid0", 00:13:31.613 "superblock": true, 00:13:31.613 "num_base_bdevs": 4, 00:13:31.613 "num_base_bdevs_discovered": 3, 00:13:31.613 "num_base_bdevs_operational": 4, 00:13:31.613 "base_bdevs_list": [ 00:13:31.613 { 00:13:31.613 "name": "BaseBdev1", 00:13:31.613 "uuid": "25bda460-dcfa-437f-994a-a244076edf8d", 00:13:31.613 "is_configured": true, 00:13:31.613 "data_offset": 2048, 00:13:31.613 "data_size": 63488 00:13:31.613 }, 00:13:31.613 { 00:13:31.613 "name": "BaseBdev2", 00:13:31.613 
"uuid": "1c3bdceb-753d-4300-879d-43ea9ffaad08", 00:13:31.613 "is_configured": true, 00:13:31.613 "data_offset": 2048, 00:13:31.613 "data_size": 63488 00:13:31.613 }, 00:13:31.613 { 00:13:31.613 "name": "BaseBdev3", 00:13:31.613 "uuid": "09281549-c717-4c65-9dfc-4d79bccbb2b9", 00:13:31.613 "is_configured": true, 00:13:31.613 "data_offset": 2048, 00:13:31.613 "data_size": 63488 00:13:31.613 }, 00:13:31.613 { 00:13:31.613 "name": "BaseBdev4", 00:13:31.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.613 "is_configured": false, 00:13:31.613 "data_offset": 0, 00:13:31.613 "data_size": 0 00:13:31.613 } 00:13:31.613 ] 00:13:31.613 }' 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.613 18:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.192 [2024-12-06 18:11:57.531684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:32.192 [2024-12-06 18:11:57.532076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:32.192 [2024-12-06 18:11:57.532096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:32.192 BaseBdev4 00:13:32.192 [2024-12-06 18:11:57.532433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:32.192 [2024-12-06 18:11:57.532621] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:32.192 [2024-12-06 18:11:57.532640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:32.192 [2024-12-06 18:11:57.532871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.192 [ 00:13:32.192 { 00:13:32.192 "name": "BaseBdev4", 00:13:32.192 "aliases": [ 00:13:32.192 "f9b40102-50ce-40dc-bb98-6b3118628239" 00:13:32.192 ], 00:13:32.192 "product_name": "Malloc disk", 00:13:32.192 "block_size": 512, 00:13:32.192 
"num_blocks": 65536, 00:13:32.192 "uuid": "f9b40102-50ce-40dc-bb98-6b3118628239", 00:13:32.192 "assigned_rate_limits": { 00:13:32.192 "rw_ios_per_sec": 0, 00:13:32.192 "rw_mbytes_per_sec": 0, 00:13:32.192 "r_mbytes_per_sec": 0, 00:13:32.192 "w_mbytes_per_sec": 0 00:13:32.192 }, 00:13:32.192 "claimed": true, 00:13:32.192 "claim_type": "exclusive_write", 00:13:32.192 "zoned": false, 00:13:32.192 "supported_io_types": { 00:13:32.192 "read": true, 00:13:32.192 "write": true, 00:13:32.192 "unmap": true, 00:13:32.192 "flush": true, 00:13:32.192 "reset": true, 00:13:32.192 "nvme_admin": false, 00:13:32.192 "nvme_io": false, 00:13:32.192 "nvme_io_md": false, 00:13:32.192 "write_zeroes": true, 00:13:32.192 "zcopy": true, 00:13:32.192 "get_zone_info": false, 00:13:32.192 "zone_management": false, 00:13:32.192 "zone_append": false, 00:13:32.192 "compare": false, 00:13:32.192 "compare_and_write": false, 00:13:32.192 "abort": true, 00:13:32.192 "seek_hole": false, 00:13:32.192 "seek_data": false, 00:13:32.192 "copy": true, 00:13:32.192 "nvme_iov_md": false 00:13:32.192 }, 00:13:32.192 "memory_domains": [ 00:13:32.192 { 00:13:32.192 "dma_device_id": "system", 00:13:32.192 "dma_device_type": 1 00:13:32.192 }, 00:13:32.192 { 00:13:32.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.192 "dma_device_type": 2 00:13:32.192 } 00:13:32.192 ], 00:13:32.192 "driver_specific": {} 00:13:32.192 } 00:13:32.192 ] 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.192 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:32.193 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.193 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:32.193 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.193 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.193 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.193 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.193 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.193 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.193 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.193 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.193 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.193 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.193 "name": "Existed_Raid", 00:13:32.193 "uuid": "9b53837d-b78f-4f8e-9f58-48ae283b7cc6", 00:13:32.193 "strip_size_kb": 64, 00:13:32.193 "state": "online", 00:13:32.193 "raid_level": "raid0", 00:13:32.193 "superblock": true, 00:13:32.193 "num_base_bdevs": 4, 
00:13:32.193 "num_base_bdevs_discovered": 4, 00:13:32.193 "num_base_bdevs_operational": 4, 00:13:32.193 "base_bdevs_list": [ 00:13:32.193 { 00:13:32.193 "name": "BaseBdev1", 00:13:32.193 "uuid": "25bda460-dcfa-437f-994a-a244076edf8d", 00:13:32.193 "is_configured": true, 00:13:32.193 "data_offset": 2048, 00:13:32.193 "data_size": 63488 00:13:32.193 }, 00:13:32.193 { 00:13:32.193 "name": "BaseBdev2", 00:13:32.193 "uuid": "1c3bdceb-753d-4300-879d-43ea9ffaad08", 00:13:32.193 "is_configured": true, 00:13:32.193 "data_offset": 2048, 00:13:32.193 "data_size": 63488 00:13:32.193 }, 00:13:32.193 { 00:13:32.193 "name": "BaseBdev3", 00:13:32.193 "uuid": "09281549-c717-4c65-9dfc-4d79bccbb2b9", 00:13:32.193 "is_configured": true, 00:13:32.193 "data_offset": 2048, 00:13:32.193 "data_size": 63488 00:13:32.193 }, 00:13:32.193 { 00:13:32.193 "name": "BaseBdev4", 00:13:32.193 "uuid": "f9b40102-50ce-40dc-bb98-6b3118628239", 00:13:32.193 "is_configured": true, 00:13:32.193 "data_offset": 2048, 00:13:32.193 "data_size": 63488 00:13:32.193 } 00:13:32.193 ] 00:13:32.193 }' 00:13:32.193 18:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.193 18:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.814 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:32.814 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:32.814 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:32.814 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:32.814 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:32.814 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:32.814 
18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:32.814 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.814 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:32.814 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.814 [2024-12-06 18:11:58.096488] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.814 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.814 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:32.814 "name": "Existed_Raid", 00:13:32.814 "aliases": [ 00:13:32.814 "9b53837d-b78f-4f8e-9f58-48ae283b7cc6" 00:13:32.814 ], 00:13:32.814 "product_name": "Raid Volume", 00:13:32.814 "block_size": 512, 00:13:32.814 "num_blocks": 253952, 00:13:32.814 "uuid": "9b53837d-b78f-4f8e-9f58-48ae283b7cc6", 00:13:32.814 "assigned_rate_limits": { 00:13:32.814 "rw_ios_per_sec": 0, 00:13:32.814 "rw_mbytes_per_sec": 0, 00:13:32.814 "r_mbytes_per_sec": 0, 00:13:32.814 "w_mbytes_per_sec": 0 00:13:32.814 }, 00:13:32.814 "claimed": false, 00:13:32.814 "zoned": false, 00:13:32.814 "supported_io_types": { 00:13:32.814 "read": true, 00:13:32.814 "write": true, 00:13:32.814 "unmap": true, 00:13:32.814 "flush": true, 00:13:32.814 "reset": true, 00:13:32.814 "nvme_admin": false, 00:13:32.814 "nvme_io": false, 00:13:32.814 "nvme_io_md": false, 00:13:32.814 "write_zeroes": true, 00:13:32.814 "zcopy": false, 00:13:32.814 "get_zone_info": false, 00:13:32.815 "zone_management": false, 00:13:32.815 "zone_append": false, 00:13:32.815 "compare": false, 00:13:32.815 "compare_and_write": false, 00:13:32.815 "abort": false, 00:13:32.815 "seek_hole": false, 00:13:32.815 "seek_data": false, 00:13:32.815 "copy": false, 00:13:32.815 
"nvme_iov_md": false 00:13:32.815 }, 00:13:32.815 "memory_domains": [ 00:13:32.815 { 00:13:32.815 "dma_device_id": "system", 00:13:32.815 "dma_device_type": 1 00:13:32.815 }, 00:13:32.815 { 00:13:32.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.815 "dma_device_type": 2 00:13:32.815 }, 00:13:32.815 { 00:13:32.815 "dma_device_id": "system", 00:13:32.815 "dma_device_type": 1 00:13:32.815 }, 00:13:32.815 { 00:13:32.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.815 "dma_device_type": 2 00:13:32.815 }, 00:13:32.815 { 00:13:32.815 "dma_device_id": "system", 00:13:32.815 "dma_device_type": 1 00:13:32.815 }, 00:13:32.815 { 00:13:32.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.815 "dma_device_type": 2 00:13:32.815 }, 00:13:32.815 { 00:13:32.815 "dma_device_id": "system", 00:13:32.815 "dma_device_type": 1 00:13:32.815 }, 00:13:32.815 { 00:13:32.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.815 "dma_device_type": 2 00:13:32.815 } 00:13:32.815 ], 00:13:32.815 "driver_specific": { 00:13:32.815 "raid": { 00:13:32.815 "uuid": "9b53837d-b78f-4f8e-9f58-48ae283b7cc6", 00:13:32.815 "strip_size_kb": 64, 00:13:32.815 "state": "online", 00:13:32.815 "raid_level": "raid0", 00:13:32.815 "superblock": true, 00:13:32.815 "num_base_bdevs": 4, 00:13:32.815 "num_base_bdevs_discovered": 4, 00:13:32.815 "num_base_bdevs_operational": 4, 00:13:32.815 "base_bdevs_list": [ 00:13:32.815 { 00:13:32.815 "name": "BaseBdev1", 00:13:32.815 "uuid": "25bda460-dcfa-437f-994a-a244076edf8d", 00:13:32.815 "is_configured": true, 00:13:32.815 "data_offset": 2048, 00:13:32.815 "data_size": 63488 00:13:32.815 }, 00:13:32.815 { 00:13:32.815 "name": "BaseBdev2", 00:13:32.815 "uuid": "1c3bdceb-753d-4300-879d-43ea9ffaad08", 00:13:32.815 "is_configured": true, 00:13:32.815 "data_offset": 2048, 00:13:32.815 "data_size": 63488 00:13:32.815 }, 00:13:32.815 { 00:13:32.815 "name": "BaseBdev3", 00:13:32.815 "uuid": "09281549-c717-4c65-9dfc-4d79bccbb2b9", 00:13:32.815 "is_configured": true, 
00:13:32.815 "data_offset": 2048, 00:13:32.815 "data_size": 63488 00:13:32.815 }, 00:13:32.815 { 00:13:32.815 "name": "BaseBdev4", 00:13:32.815 "uuid": "f9b40102-50ce-40dc-bb98-6b3118628239", 00:13:32.815 "is_configured": true, 00:13:32.815 "data_offset": 2048, 00:13:32.815 "data_size": 63488 00:13:32.815 } 00:13:32.815 ] 00:13:32.815 } 00:13:32.815 } 00:13:32.815 }' 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:32.815 BaseBdev2 00:13:32.815 BaseBdev3 00:13:32.815 BaseBdev4' 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.815 18:11:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.815 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.074 [2024-12-06 18:11:58.468254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:33.074 [2024-12-06 18:11:58.468482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:33.074 [2024-12-06 18:11:58.468665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:33.074 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:33.075 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.075 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.075 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.075 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.075 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.075 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.075 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.075 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.075 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.075 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.075 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:33.335 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.335 "name": "Existed_Raid", 00:13:33.335 "uuid": "9b53837d-b78f-4f8e-9f58-48ae283b7cc6", 00:13:33.335 "strip_size_kb": 64, 00:13:33.335 "state": "offline", 00:13:33.335 "raid_level": "raid0", 00:13:33.335 "superblock": true, 00:13:33.335 "num_base_bdevs": 4, 00:13:33.335 "num_base_bdevs_discovered": 3, 00:13:33.335 "num_base_bdevs_operational": 3, 00:13:33.335 "base_bdevs_list": [ 00:13:33.335 { 00:13:33.335 "name": null, 00:13:33.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.335 "is_configured": false, 00:13:33.335 "data_offset": 0, 00:13:33.335 "data_size": 63488 00:13:33.335 }, 00:13:33.335 { 00:13:33.335 "name": "BaseBdev2", 00:13:33.335 "uuid": "1c3bdceb-753d-4300-879d-43ea9ffaad08", 00:13:33.335 "is_configured": true, 00:13:33.335 "data_offset": 2048, 00:13:33.335 "data_size": 63488 00:13:33.335 }, 00:13:33.335 { 00:13:33.335 "name": "BaseBdev3", 00:13:33.335 "uuid": "09281549-c717-4c65-9dfc-4d79bccbb2b9", 00:13:33.335 "is_configured": true, 00:13:33.335 "data_offset": 2048, 00:13:33.335 "data_size": 63488 00:13:33.335 }, 00:13:33.335 { 00:13:33.335 "name": "BaseBdev4", 00:13:33.335 "uuid": "f9b40102-50ce-40dc-bb98-6b3118628239", 00:13:33.335 "is_configured": true, 00:13:33.335 "data_offset": 2048, 00:13:33.335 "data_size": 63488 00:13:33.335 } 00:13:33.335 ] 00:13:33.335 }' 00:13:33.335 18:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.335 18:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.594 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:33.594 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:33.594 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:33.594 18:11:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.594 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.594 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.594 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.865 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.866 [2024-12-06 18:11:59.150905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.866 [2024-12-06 18:11:59.292251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.866 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.125 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.125 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:34.125 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:34.125 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:34.126 18:11:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.126 [2024-12-06 18:11:59.437164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:34.126 [2024-12-06 18:11:59.437388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.126 BaseBdev2 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.126 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.386 [ 00:13:34.386 { 00:13:34.386 "name": "BaseBdev2", 00:13:34.386 "aliases": [ 00:13:34.386 
"816cc8bb-dbf1-455f-95f8-884753e44f9a" 00:13:34.386 ], 00:13:34.386 "product_name": "Malloc disk", 00:13:34.386 "block_size": 512, 00:13:34.386 "num_blocks": 65536, 00:13:34.386 "uuid": "816cc8bb-dbf1-455f-95f8-884753e44f9a", 00:13:34.386 "assigned_rate_limits": { 00:13:34.386 "rw_ios_per_sec": 0, 00:13:34.386 "rw_mbytes_per_sec": 0, 00:13:34.386 "r_mbytes_per_sec": 0, 00:13:34.386 "w_mbytes_per_sec": 0 00:13:34.386 }, 00:13:34.386 "claimed": false, 00:13:34.386 "zoned": false, 00:13:34.386 "supported_io_types": { 00:13:34.386 "read": true, 00:13:34.386 "write": true, 00:13:34.386 "unmap": true, 00:13:34.386 "flush": true, 00:13:34.386 "reset": true, 00:13:34.386 "nvme_admin": false, 00:13:34.386 "nvme_io": false, 00:13:34.386 "nvme_io_md": false, 00:13:34.386 "write_zeroes": true, 00:13:34.386 "zcopy": true, 00:13:34.386 "get_zone_info": false, 00:13:34.386 "zone_management": false, 00:13:34.386 "zone_append": false, 00:13:34.386 "compare": false, 00:13:34.386 "compare_and_write": false, 00:13:34.386 "abort": true, 00:13:34.386 "seek_hole": false, 00:13:34.386 "seek_data": false, 00:13:34.386 "copy": true, 00:13:34.386 "nvme_iov_md": false 00:13:34.386 }, 00:13:34.386 "memory_domains": [ 00:13:34.386 { 00:13:34.386 "dma_device_id": "system", 00:13:34.386 "dma_device_type": 1 00:13:34.386 }, 00:13:34.386 { 00:13:34.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.386 "dma_device_type": 2 00:13:34.386 } 00:13:34.386 ], 00:13:34.386 "driver_specific": {} 00:13:34.386 } 00:13:34.386 ] 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:34.386 18:11:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.386 BaseBdev3 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.386 [ 00:13:34.386 { 
00:13:34.386 "name": "BaseBdev3", 00:13:34.386 "aliases": [ 00:13:34.386 "a5a74b2c-b447-4667-a3da-c4220f7819aa" 00:13:34.386 ], 00:13:34.386 "product_name": "Malloc disk", 00:13:34.386 "block_size": 512, 00:13:34.386 "num_blocks": 65536, 00:13:34.386 "uuid": "a5a74b2c-b447-4667-a3da-c4220f7819aa", 00:13:34.386 "assigned_rate_limits": { 00:13:34.386 "rw_ios_per_sec": 0, 00:13:34.386 "rw_mbytes_per_sec": 0, 00:13:34.386 "r_mbytes_per_sec": 0, 00:13:34.386 "w_mbytes_per_sec": 0 00:13:34.386 }, 00:13:34.386 "claimed": false, 00:13:34.386 "zoned": false, 00:13:34.386 "supported_io_types": { 00:13:34.386 "read": true, 00:13:34.386 "write": true, 00:13:34.386 "unmap": true, 00:13:34.386 "flush": true, 00:13:34.386 "reset": true, 00:13:34.386 "nvme_admin": false, 00:13:34.386 "nvme_io": false, 00:13:34.386 "nvme_io_md": false, 00:13:34.386 "write_zeroes": true, 00:13:34.386 "zcopy": true, 00:13:34.386 "get_zone_info": false, 00:13:34.386 "zone_management": false, 00:13:34.386 "zone_append": false, 00:13:34.386 "compare": false, 00:13:34.386 "compare_and_write": false, 00:13:34.386 "abort": true, 00:13:34.386 "seek_hole": false, 00:13:34.386 "seek_data": false, 00:13:34.386 "copy": true, 00:13:34.386 "nvme_iov_md": false 00:13:34.386 }, 00:13:34.386 "memory_domains": [ 00:13:34.386 { 00:13:34.386 "dma_device_id": "system", 00:13:34.386 "dma_device_type": 1 00:13:34.386 }, 00:13:34.386 { 00:13:34.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.386 "dma_device_type": 2 00:13:34.386 } 00:13:34.386 ], 00:13:34.386 "driver_specific": {} 00:13:34.386 } 00:13:34.386 ] 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.386 BaseBdev4 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:34.386 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:34.387 [ 00:13:34.387 { 00:13:34.387 "name": "BaseBdev4", 00:13:34.387 "aliases": [ 00:13:34.387 "b41772c1-2fa9-4022-a798-a26196d379f4" 00:13:34.387 ], 00:13:34.387 "product_name": "Malloc disk", 00:13:34.387 "block_size": 512, 00:13:34.387 "num_blocks": 65536, 00:13:34.387 "uuid": "b41772c1-2fa9-4022-a798-a26196d379f4", 00:13:34.387 "assigned_rate_limits": { 00:13:34.387 "rw_ios_per_sec": 0, 00:13:34.387 "rw_mbytes_per_sec": 0, 00:13:34.387 "r_mbytes_per_sec": 0, 00:13:34.387 "w_mbytes_per_sec": 0 00:13:34.387 }, 00:13:34.387 "claimed": false, 00:13:34.387 "zoned": false, 00:13:34.387 "supported_io_types": { 00:13:34.387 "read": true, 00:13:34.387 "write": true, 00:13:34.387 "unmap": true, 00:13:34.387 "flush": true, 00:13:34.387 "reset": true, 00:13:34.387 "nvme_admin": false, 00:13:34.387 "nvme_io": false, 00:13:34.387 "nvme_io_md": false, 00:13:34.387 "write_zeroes": true, 00:13:34.387 "zcopy": true, 00:13:34.387 "get_zone_info": false, 00:13:34.387 "zone_management": false, 00:13:34.387 "zone_append": false, 00:13:34.387 "compare": false, 00:13:34.387 "compare_and_write": false, 00:13:34.387 "abort": true, 00:13:34.387 "seek_hole": false, 00:13:34.387 "seek_data": false, 00:13:34.387 "copy": true, 00:13:34.387 "nvme_iov_md": false 00:13:34.387 }, 00:13:34.387 "memory_domains": [ 00:13:34.387 { 00:13:34.387 "dma_device_id": "system", 00:13:34.387 "dma_device_type": 1 00:13:34.387 }, 00:13:34.387 { 00:13:34.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.387 "dma_device_type": 2 00:13:34.387 } 00:13:34.387 ], 00:13:34.387 "driver_specific": {} 00:13:34.387 } 00:13:34.387 ] 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:34.387 18:11:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.387 [2024-12-06 18:11:59.795768] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:34.387 [2024-12-06 18:11:59.795993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:34.387 [2024-12-06 18:11:59.796126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.387 [2024-12-06 18:11:59.798735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:34.387 [2024-12-06 18:11:59.799765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.387 "name": "Existed_Raid", 00:13:34.387 "uuid": "373f8761-8b21-466d-96e5-9eb9664755a4", 00:13:34.387 "strip_size_kb": 64, 00:13:34.387 "state": "configuring", 00:13:34.387 "raid_level": "raid0", 00:13:34.387 "superblock": true, 00:13:34.387 "num_base_bdevs": 4, 00:13:34.387 "num_base_bdevs_discovered": 3, 00:13:34.387 "num_base_bdevs_operational": 4, 00:13:34.387 "base_bdevs_list": [ 00:13:34.387 { 00:13:34.387 "name": "BaseBdev1", 00:13:34.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.387 "is_configured": false, 00:13:34.387 "data_offset": 0, 00:13:34.387 "data_size": 0 00:13:34.387 }, 00:13:34.387 { 00:13:34.387 "name": "BaseBdev2", 00:13:34.387 "uuid": "816cc8bb-dbf1-455f-95f8-884753e44f9a", 00:13:34.387 "is_configured": true, 00:13:34.387 "data_offset": 2048, 00:13:34.387 "data_size": 63488 
00:13:34.387 }, 00:13:34.387 { 00:13:34.387 "name": "BaseBdev3", 00:13:34.387 "uuid": "a5a74b2c-b447-4667-a3da-c4220f7819aa", 00:13:34.387 "is_configured": true, 00:13:34.387 "data_offset": 2048, 00:13:34.387 "data_size": 63488 00:13:34.387 }, 00:13:34.387 { 00:13:34.387 "name": "BaseBdev4", 00:13:34.387 "uuid": "b41772c1-2fa9-4022-a798-a26196d379f4", 00:13:34.387 "is_configured": true, 00:13:34.387 "data_offset": 2048, 00:13:34.387 "data_size": 63488 00:13:34.387 } 00:13:34.387 ] 00:13:34.387 }' 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.387 18:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.972 [2024-12-06 18:12:00.352229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.972 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.972 "name": "Existed_Raid", 00:13:34.972 "uuid": "373f8761-8b21-466d-96e5-9eb9664755a4", 00:13:34.972 "strip_size_kb": 64, 00:13:34.972 "state": "configuring", 00:13:34.972 "raid_level": "raid0", 00:13:34.972 "superblock": true, 00:13:34.972 "num_base_bdevs": 4, 00:13:34.972 "num_base_bdevs_discovered": 2, 00:13:34.972 "num_base_bdevs_operational": 4, 00:13:34.972 "base_bdevs_list": [ 00:13:34.972 { 00:13:34.972 "name": "BaseBdev1", 00:13:34.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.972 "is_configured": false, 00:13:34.972 "data_offset": 0, 00:13:34.972 "data_size": 0 00:13:34.972 }, 00:13:34.972 { 00:13:34.972 "name": null, 00:13:34.972 "uuid": "816cc8bb-dbf1-455f-95f8-884753e44f9a", 00:13:34.972 "is_configured": false, 00:13:34.972 "data_offset": 0, 00:13:34.972 "data_size": 63488 
00:13:34.972 }, 00:13:34.972 { 00:13:34.972 "name": "BaseBdev3", 00:13:34.972 "uuid": "a5a74b2c-b447-4667-a3da-c4220f7819aa", 00:13:34.972 "is_configured": true, 00:13:34.972 "data_offset": 2048, 00:13:34.972 "data_size": 63488 00:13:34.972 }, 00:13:34.972 { 00:13:34.973 "name": "BaseBdev4", 00:13:34.973 "uuid": "b41772c1-2fa9-4022-a798-a26196d379f4", 00:13:34.973 "is_configured": true, 00:13:34.973 "data_offset": 2048, 00:13:34.973 "data_size": 63488 00:13:34.973 } 00:13:34.973 ] 00:13:34.973 }' 00:13:34.973 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.973 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.540 [2024-12-06 18:12:00.970860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.540 BaseBdev1 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.540 18:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.540 [ 00:13:35.540 { 00:13:35.540 "name": "BaseBdev1", 00:13:35.540 "aliases": [ 00:13:35.540 "326ae1e9-10ec-431e-8849-316ad786f5bd" 00:13:35.540 ], 00:13:35.540 "product_name": "Malloc disk", 00:13:35.540 "block_size": 512, 00:13:35.540 "num_blocks": 65536, 00:13:35.540 "uuid": "326ae1e9-10ec-431e-8849-316ad786f5bd", 00:13:35.540 "assigned_rate_limits": { 00:13:35.540 "rw_ios_per_sec": 0, 00:13:35.540 "rw_mbytes_per_sec": 0, 
00:13:35.540 "r_mbytes_per_sec": 0, 00:13:35.540 "w_mbytes_per_sec": 0 00:13:35.540 }, 00:13:35.540 "claimed": true, 00:13:35.540 "claim_type": "exclusive_write", 00:13:35.540 "zoned": false, 00:13:35.540 "supported_io_types": { 00:13:35.540 "read": true, 00:13:35.540 "write": true, 00:13:35.540 "unmap": true, 00:13:35.540 "flush": true, 00:13:35.540 "reset": true, 00:13:35.540 "nvme_admin": false, 00:13:35.540 "nvme_io": false, 00:13:35.540 "nvme_io_md": false, 00:13:35.540 "write_zeroes": true, 00:13:35.540 "zcopy": true, 00:13:35.540 "get_zone_info": false, 00:13:35.540 "zone_management": false, 00:13:35.540 "zone_append": false, 00:13:35.540 "compare": false, 00:13:35.540 "compare_and_write": false, 00:13:35.540 "abort": true, 00:13:35.540 "seek_hole": false, 00:13:35.540 "seek_data": false, 00:13:35.540 "copy": true, 00:13:35.540 "nvme_iov_md": false 00:13:35.540 }, 00:13:35.540 "memory_domains": [ 00:13:35.540 { 00:13:35.540 "dma_device_id": "system", 00:13:35.540 "dma_device_type": 1 00:13:35.540 }, 00:13:35.540 { 00:13:35.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.540 "dma_device_type": 2 00:13:35.540 } 00:13:35.540 ], 00:13:35.540 "driver_specific": {} 00:13:35.540 } 00:13:35.540 ] 00:13:35.540 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.540 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:35.540 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:35.540 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.540 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.540 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:35.540 18:12:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.540 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.541 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.541 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.541 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.541 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.541 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.541 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.541 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.541 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.541 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.541 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.541 "name": "Existed_Raid", 00:13:35.541 "uuid": "373f8761-8b21-466d-96e5-9eb9664755a4", 00:13:35.541 "strip_size_kb": 64, 00:13:35.541 "state": "configuring", 00:13:35.541 "raid_level": "raid0", 00:13:35.541 "superblock": true, 00:13:35.541 "num_base_bdevs": 4, 00:13:35.541 "num_base_bdevs_discovered": 3, 00:13:35.541 "num_base_bdevs_operational": 4, 00:13:35.541 "base_bdevs_list": [ 00:13:35.541 { 00:13:35.541 "name": "BaseBdev1", 00:13:35.541 "uuid": "326ae1e9-10ec-431e-8849-316ad786f5bd", 00:13:35.541 "is_configured": true, 00:13:35.541 "data_offset": 2048, 00:13:35.541 "data_size": 63488 00:13:35.541 }, 00:13:35.541 { 
00:13:35.541 "name": null, 00:13:35.541 "uuid": "816cc8bb-dbf1-455f-95f8-884753e44f9a", 00:13:35.541 "is_configured": false, 00:13:35.541 "data_offset": 0, 00:13:35.541 "data_size": 63488 00:13:35.541 }, 00:13:35.541 { 00:13:35.541 "name": "BaseBdev3", 00:13:35.541 "uuid": "a5a74b2c-b447-4667-a3da-c4220f7819aa", 00:13:35.541 "is_configured": true, 00:13:35.541 "data_offset": 2048, 00:13:35.541 "data_size": 63488 00:13:35.541 }, 00:13:35.541 { 00:13:35.541 "name": "BaseBdev4", 00:13:35.541 "uuid": "b41772c1-2fa9-4022-a798-a26196d379f4", 00:13:35.541 "is_configured": true, 00:13:35.541 "data_offset": 2048, 00:13:35.541 "data_size": 63488 00:13:35.541 } 00:13:35.541 ] 00:13:35.541 }' 00:13:35.541 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.541 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.106 [2024-12-06 18:12:01.603106] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:36.106 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.107 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.107 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.107 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.107 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.107 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.107 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.107 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.107 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.107 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.364 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.364 18:12:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.364 "name": "Existed_Raid", 00:13:36.364 "uuid": "373f8761-8b21-466d-96e5-9eb9664755a4", 00:13:36.364 "strip_size_kb": 64, 00:13:36.364 "state": "configuring", 00:13:36.364 "raid_level": "raid0", 00:13:36.364 "superblock": true, 00:13:36.364 "num_base_bdevs": 4, 00:13:36.364 "num_base_bdevs_discovered": 2, 00:13:36.364 "num_base_bdevs_operational": 4, 00:13:36.364 "base_bdevs_list": [ 00:13:36.364 { 00:13:36.364 "name": "BaseBdev1", 00:13:36.364 "uuid": "326ae1e9-10ec-431e-8849-316ad786f5bd", 00:13:36.364 "is_configured": true, 00:13:36.364 "data_offset": 2048, 00:13:36.364 "data_size": 63488 00:13:36.364 }, 00:13:36.364 { 00:13:36.364 "name": null, 00:13:36.364 "uuid": "816cc8bb-dbf1-455f-95f8-884753e44f9a", 00:13:36.364 "is_configured": false, 00:13:36.364 "data_offset": 0, 00:13:36.364 "data_size": 63488 00:13:36.364 }, 00:13:36.364 { 00:13:36.364 "name": null, 00:13:36.364 "uuid": "a5a74b2c-b447-4667-a3da-c4220f7819aa", 00:13:36.364 "is_configured": false, 00:13:36.365 "data_offset": 0, 00:13:36.365 "data_size": 63488 00:13:36.365 }, 00:13:36.365 { 00:13:36.365 "name": "BaseBdev4", 00:13:36.365 "uuid": "b41772c1-2fa9-4022-a798-a26196d379f4", 00:13:36.365 "is_configured": true, 00:13:36.365 "data_offset": 2048, 00:13:36.365 "data_size": 63488 00:13:36.365 } 00:13:36.365 ] 00:13:36.365 }' 00:13:36.365 18:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.365 18:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.623 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:36.623 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.623 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.623 
18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.881 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.881 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:36.881 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:36.881 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.881 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.881 [2024-12-06 18:12:02.179275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:36.881 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.881 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:36.881 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.881 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.881 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:36.881 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.882 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.882 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.882 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.882 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:36.882 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.882 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.882 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.882 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.882 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.882 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.882 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.882 "name": "Existed_Raid", 00:13:36.882 "uuid": "373f8761-8b21-466d-96e5-9eb9664755a4", 00:13:36.882 "strip_size_kb": 64, 00:13:36.882 "state": "configuring", 00:13:36.882 "raid_level": "raid0", 00:13:36.882 "superblock": true, 00:13:36.882 "num_base_bdevs": 4, 00:13:36.882 "num_base_bdevs_discovered": 3, 00:13:36.882 "num_base_bdevs_operational": 4, 00:13:36.882 "base_bdevs_list": [ 00:13:36.882 { 00:13:36.882 "name": "BaseBdev1", 00:13:36.882 "uuid": "326ae1e9-10ec-431e-8849-316ad786f5bd", 00:13:36.882 "is_configured": true, 00:13:36.882 "data_offset": 2048, 00:13:36.882 "data_size": 63488 00:13:36.882 }, 00:13:36.882 { 00:13:36.882 "name": null, 00:13:36.882 "uuid": "816cc8bb-dbf1-455f-95f8-884753e44f9a", 00:13:36.882 "is_configured": false, 00:13:36.882 "data_offset": 0, 00:13:36.882 "data_size": 63488 00:13:36.882 }, 00:13:36.882 { 00:13:36.882 "name": "BaseBdev3", 00:13:36.882 "uuid": "a5a74b2c-b447-4667-a3da-c4220f7819aa", 00:13:36.882 "is_configured": true, 00:13:36.882 "data_offset": 2048, 00:13:36.882 "data_size": 63488 00:13:36.882 }, 00:13:36.882 { 00:13:36.882 "name": "BaseBdev4", 00:13:36.882 "uuid": 
"b41772c1-2fa9-4022-a798-a26196d379f4", 00:13:36.882 "is_configured": true, 00:13:36.882 "data_offset": 2048, 00:13:36.882 "data_size": 63488 00:13:36.882 } 00:13:36.882 ] 00:13:36.882 }' 00:13:36.882 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.882 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.447 [2024-12-06 18:12:02.759535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.447 "name": "Existed_Raid", 00:13:37.447 "uuid": "373f8761-8b21-466d-96e5-9eb9664755a4", 00:13:37.447 "strip_size_kb": 64, 00:13:37.447 "state": "configuring", 00:13:37.447 "raid_level": "raid0", 00:13:37.447 "superblock": true, 00:13:37.447 "num_base_bdevs": 4, 00:13:37.447 "num_base_bdevs_discovered": 2, 00:13:37.447 "num_base_bdevs_operational": 4, 00:13:37.447 "base_bdevs_list": [ 00:13:37.447 { 00:13:37.447 "name": null, 00:13:37.447 
"uuid": "326ae1e9-10ec-431e-8849-316ad786f5bd", 00:13:37.447 "is_configured": false, 00:13:37.447 "data_offset": 0, 00:13:37.447 "data_size": 63488 00:13:37.447 }, 00:13:37.447 { 00:13:37.447 "name": null, 00:13:37.447 "uuid": "816cc8bb-dbf1-455f-95f8-884753e44f9a", 00:13:37.447 "is_configured": false, 00:13:37.447 "data_offset": 0, 00:13:37.447 "data_size": 63488 00:13:37.447 }, 00:13:37.447 { 00:13:37.447 "name": "BaseBdev3", 00:13:37.447 "uuid": "a5a74b2c-b447-4667-a3da-c4220f7819aa", 00:13:37.447 "is_configured": true, 00:13:37.447 "data_offset": 2048, 00:13:37.447 "data_size": 63488 00:13:37.447 }, 00:13:37.447 { 00:13:37.447 "name": "BaseBdev4", 00:13:37.447 "uuid": "b41772c1-2fa9-4022-a798-a26196d379f4", 00:13:37.447 "is_configured": true, 00:13:37.447 "data_offset": 2048, 00:13:37.447 "data_size": 63488 00:13:37.447 } 00:13:37.447 ] 00:13:37.447 }' 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.447 18:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.013 [2024-12-06 18:12:03.432012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.013 18:12:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.013 "name": "Existed_Raid", 00:13:38.013 "uuid": "373f8761-8b21-466d-96e5-9eb9664755a4", 00:13:38.013 "strip_size_kb": 64, 00:13:38.013 "state": "configuring", 00:13:38.013 "raid_level": "raid0", 00:13:38.013 "superblock": true, 00:13:38.013 "num_base_bdevs": 4, 00:13:38.013 "num_base_bdevs_discovered": 3, 00:13:38.013 "num_base_bdevs_operational": 4, 00:13:38.013 "base_bdevs_list": [ 00:13:38.013 { 00:13:38.013 "name": null, 00:13:38.013 "uuid": "326ae1e9-10ec-431e-8849-316ad786f5bd", 00:13:38.013 "is_configured": false, 00:13:38.013 "data_offset": 0, 00:13:38.013 "data_size": 63488 00:13:38.013 }, 00:13:38.013 { 00:13:38.013 "name": "BaseBdev2", 00:13:38.013 "uuid": "816cc8bb-dbf1-455f-95f8-884753e44f9a", 00:13:38.013 "is_configured": true, 00:13:38.013 "data_offset": 2048, 00:13:38.013 "data_size": 63488 00:13:38.013 }, 00:13:38.013 { 00:13:38.013 "name": "BaseBdev3", 00:13:38.013 "uuid": "a5a74b2c-b447-4667-a3da-c4220f7819aa", 00:13:38.013 "is_configured": true, 00:13:38.013 "data_offset": 2048, 00:13:38.013 "data_size": 63488 00:13:38.013 }, 00:13:38.013 { 00:13:38.013 "name": "BaseBdev4", 00:13:38.013 "uuid": "b41772c1-2fa9-4022-a798-a26196d379f4", 00:13:38.013 "is_configured": true, 00:13:38.013 "data_offset": 2048, 00:13:38.013 "data_size": 63488 00:13:38.013 } 00:13:38.013 ] 00:13:38.013 }' 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.013 18:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.578 18:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.579 18:12:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:38.579 18:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.579 18:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.579 18:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.579 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:38.579 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.579 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:38.579 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.579 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.579 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.579 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 326ae1e9-10ec-431e-8849-316ad786f5bd 00:13:38.579 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.579 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.579 [2024-12-06 18:12:04.095338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:38.579 NewBaseBdev 00:13:38.579 [2024-12-06 18:12:04.096032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:38.579 [2024-12-06 18:12:04.096057] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:38.579 [2024-12-06 18:12:04.096447] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:38.579 [2024-12-06 18:12:04.096631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:38.579 [2024-12-06 18:12:04.096651] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:38.579 [2024-12-06 18:12:04.096835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.579 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.837 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:38.837 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:38.837 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.837 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:38.837 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.837 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.837 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.837 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.837 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.837 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.838 
18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.838 [ 00:13:38.838 { 00:13:38.838 "name": "NewBaseBdev", 00:13:38.838 "aliases": [ 00:13:38.838 "326ae1e9-10ec-431e-8849-316ad786f5bd" 00:13:38.838 ], 00:13:38.838 "product_name": "Malloc disk", 00:13:38.838 "block_size": 512, 00:13:38.838 "num_blocks": 65536, 00:13:38.838 "uuid": "326ae1e9-10ec-431e-8849-316ad786f5bd", 00:13:38.838 "assigned_rate_limits": { 00:13:38.838 "rw_ios_per_sec": 0, 00:13:38.838 "rw_mbytes_per_sec": 0, 00:13:38.838 "r_mbytes_per_sec": 0, 00:13:38.838 "w_mbytes_per_sec": 0 00:13:38.838 }, 00:13:38.838 "claimed": true, 00:13:38.838 "claim_type": "exclusive_write", 00:13:38.838 "zoned": false, 00:13:38.838 "supported_io_types": { 00:13:38.838 "read": true, 00:13:38.838 "write": true, 00:13:38.838 "unmap": true, 00:13:38.838 "flush": true, 00:13:38.838 "reset": true, 00:13:38.838 "nvme_admin": false, 00:13:38.838 "nvme_io": false, 00:13:38.838 "nvme_io_md": false, 00:13:38.838 "write_zeroes": true, 00:13:38.838 "zcopy": true, 00:13:38.838 "get_zone_info": false, 00:13:38.838 "zone_management": false, 00:13:38.838 "zone_append": false, 00:13:38.838 "compare": false, 00:13:38.838 "compare_and_write": false, 00:13:38.838 "abort": true, 00:13:38.838 "seek_hole": false, 00:13:38.838 "seek_data": false, 00:13:38.838 "copy": true, 00:13:38.838 "nvme_iov_md": false 00:13:38.838 }, 00:13:38.838 "memory_domains": [ 00:13:38.838 { 00:13:38.838 "dma_device_id": "system", 00:13:38.838 "dma_device_type": 1 00:13:38.838 }, 00:13:38.838 { 00:13:38.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.838 "dma_device_type": 2 00:13:38.838 } 00:13:38.838 ], 00:13:38.838 "driver_specific": {} 00:13:38.838 } 00:13:38.838 ] 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:38.838 18:12:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.838 "name": "Existed_Raid", 00:13:38.838 "uuid": "373f8761-8b21-466d-96e5-9eb9664755a4", 00:13:38.838 "strip_size_kb": 64, 00:13:38.838 
"state": "online", 00:13:38.838 "raid_level": "raid0", 00:13:38.838 "superblock": true, 00:13:38.838 "num_base_bdevs": 4, 00:13:38.838 "num_base_bdevs_discovered": 4, 00:13:38.838 "num_base_bdevs_operational": 4, 00:13:38.838 "base_bdevs_list": [ 00:13:38.838 { 00:13:38.838 "name": "NewBaseBdev", 00:13:38.838 "uuid": "326ae1e9-10ec-431e-8849-316ad786f5bd", 00:13:38.838 "is_configured": true, 00:13:38.838 "data_offset": 2048, 00:13:38.838 "data_size": 63488 00:13:38.838 }, 00:13:38.838 { 00:13:38.838 "name": "BaseBdev2", 00:13:38.838 "uuid": "816cc8bb-dbf1-455f-95f8-884753e44f9a", 00:13:38.838 "is_configured": true, 00:13:38.838 "data_offset": 2048, 00:13:38.838 "data_size": 63488 00:13:38.838 }, 00:13:38.838 { 00:13:38.838 "name": "BaseBdev3", 00:13:38.838 "uuid": "a5a74b2c-b447-4667-a3da-c4220f7819aa", 00:13:38.838 "is_configured": true, 00:13:38.838 "data_offset": 2048, 00:13:38.838 "data_size": 63488 00:13:38.838 }, 00:13:38.838 { 00:13:38.838 "name": "BaseBdev4", 00:13:38.838 "uuid": "b41772c1-2fa9-4022-a798-a26196d379f4", 00:13:38.838 "is_configured": true, 00:13:38.838 "data_offset": 2048, 00:13:38.838 "data_size": 63488 00:13:38.838 } 00:13:38.838 ] 00:13:38.838 }' 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.838 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.404 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:39.404 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:39.404 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:39.404 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:39.404 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:39.404 
18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:39.404 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:39.404 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:39.404 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.404 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.404 [2024-12-06 18:12:04.660031] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.404 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.404 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:39.404 "name": "Existed_Raid", 00:13:39.404 "aliases": [ 00:13:39.404 "373f8761-8b21-466d-96e5-9eb9664755a4" 00:13:39.404 ], 00:13:39.404 "product_name": "Raid Volume", 00:13:39.404 "block_size": 512, 00:13:39.404 "num_blocks": 253952, 00:13:39.404 "uuid": "373f8761-8b21-466d-96e5-9eb9664755a4", 00:13:39.404 "assigned_rate_limits": { 00:13:39.404 "rw_ios_per_sec": 0, 00:13:39.404 "rw_mbytes_per_sec": 0, 00:13:39.404 "r_mbytes_per_sec": 0, 00:13:39.404 "w_mbytes_per_sec": 0 00:13:39.404 }, 00:13:39.404 "claimed": false, 00:13:39.404 "zoned": false, 00:13:39.404 "supported_io_types": { 00:13:39.404 "read": true, 00:13:39.404 "write": true, 00:13:39.404 "unmap": true, 00:13:39.404 "flush": true, 00:13:39.404 "reset": true, 00:13:39.404 "nvme_admin": false, 00:13:39.404 "nvme_io": false, 00:13:39.404 "nvme_io_md": false, 00:13:39.404 "write_zeroes": true, 00:13:39.404 "zcopy": false, 00:13:39.404 "get_zone_info": false, 00:13:39.404 "zone_management": false, 00:13:39.404 "zone_append": false, 00:13:39.404 "compare": false, 00:13:39.404 "compare_and_write": false, 00:13:39.404 "abort": 
false, 00:13:39.404 "seek_hole": false, 00:13:39.404 "seek_data": false, 00:13:39.404 "copy": false, 00:13:39.404 "nvme_iov_md": false 00:13:39.404 }, 00:13:39.404 "memory_domains": [ 00:13:39.404 { 00:13:39.405 "dma_device_id": "system", 00:13:39.405 "dma_device_type": 1 00:13:39.405 }, 00:13:39.405 { 00:13:39.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.405 "dma_device_type": 2 00:13:39.405 }, 00:13:39.405 { 00:13:39.405 "dma_device_id": "system", 00:13:39.405 "dma_device_type": 1 00:13:39.405 }, 00:13:39.405 { 00:13:39.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.405 "dma_device_type": 2 00:13:39.405 }, 00:13:39.405 { 00:13:39.405 "dma_device_id": "system", 00:13:39.405 "dma_device_type": 1 00:13:39.405 }, 00:13:39.405 { 00:13:39.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.405 "dma_device_type": 2 00:13:39.405 }, 00:13:39.405 { 00:13:39.405 "dma_device_id": "system", 00:13:39.405 "dma_device_type": 1 00:13:39.405 }, 00:13:39.405 { 00:13:39.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.405 "dma_device_type": 2 00:13:39.405 } 00:13:39.405 ], 00:13:39.405 "driver_specific": { 00:13:39.405 "raid": { 00:13:39.405 "uuid": "373f8761-8b21-466d-96e5-9eb9664755a4", 00:13:39.405 "strip_size_kb": 64, 00:13:39.405 "state": "online", 00:13:39.405 "raid_level": "raid0", 00:13:39.405 "superblock": true, 00:13:39.405 "num_base_bdevs": 4, 00:13:39.405 "num_base_bdevs_discovered": 4, 00:13:39.405 "num_base_bdevs_operational": 4, 00:13:39.405 "base_bdevs_list": [ 00:13:39.405 { 00:13:39.405 "name": "NewBaseBdev", 00:13:39.405 "uuid": "326ae1e9-10ec-431e-8849-316ad786f5bd", 00:13:39.405 "is_configured": true, 00:13:39.405 "data_offset": 2048, 00:13:39.405 "data_size": 63488 00:13:39.405 }, 00:13:39.405 { 00:13:39.405 "name": "BaseBdev2", 00:13:39.405 "uuid": "816cc8bb-dbf1-455f-95f8-884753e44f9a", 00:13:39.405 "is_configured": true, 00:13:39.405 "data_offset": 2048, 00:13:39.405 "data_size": 63488 00:13:39.405 }, 00:13:39.405 { 00:13:39.405 
"name": "BaseBdev3", 00:13:39.405 "uuid": "a5a74b2c-b447-4667-a3da-c4220f7819aa", 00:13:39.405 "is_configured": true, 00:13:39.405 "data_offset": 2048, 00:13:39.405 "data_size": 63488 00:13:39.405 }, 00:13:39.405 { 00:13:39.405 "name": "BaseBdev4", 00:13:39.405 "uuid": "b41772c1-2fa9-4022-a798-a26196d379f4", 00:13:39.405 "is_configured": true, 00:13:39.405 "data_offset": 2048, 00:13:39.405 "data_size": 63488 00:13:39.405 } 00:13:39.405 ] 00:13:39.405 } 00:13:39.405 } 00:13:39.405 }' 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:39.405 BaseBdev2 00:13:39.405 BaseBdev3 00:13:39.405 BaseBdev4' 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.405 18:12:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.405 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.665 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.665 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.665 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:39.665 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.665 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:39.665 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.665 18:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.665 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 18:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.665 [2024-12-06 18:12:05.027661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.665 [2024-12-06 18:12:05.027867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:39.665 [2024-12-06 18:12:05.028067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.665 [2024-12-06 18:12:05.028260] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.665 [2024-12-06 18:12:05.028373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70196 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70196 ']' 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70196 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70196 00:13:39.665 killing process with pid 70196 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70196' 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70196 00:13:39.665 [2024-12-06 18:12:05.068114] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:39.665 18:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70196 00:13:39.923 [2024-12-06 18:12:05.416098] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:41.348 18:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:41.348 00:13:41.348 real 0m13.106s 00:13:41.348 user 0m21.880s 00:13:41.348 sys 0m1.750s 00:13:41.348 18:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.348 18:12:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.348 ************************************ 00:13:41.348 END TEST raid_state_function_test_sb 00:13:41.348 ************************************ 00:13:41.348 18:12:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:41.348 18:12:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:41.348 18:12:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.348 18:12:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:41.348 ************************************ 00:13:41.348 START TEST raid_superblock_test 00:13:41.348 ************************************ 00:13:41.348 18:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:13:41.348 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70887 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70887 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70887 ']' 00:13:41.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.349 18:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.349 [2024-12-06 18:12:06.663701] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:13:41.349 [2024-12-06 18:12:06.664144] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70887 ] 00:13:41.349 [2024-12-06 18:12:06.846607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.607 [2024-12-06 18:12:06.975597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.865 [2024-12-06 18:12:07.182177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.865 [2024-12-06 18:12:07.182229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:42.433 
18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.433 malloc1 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.433 [2024-12-06 18:12:07.710860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:42.433 [2024-12-06 18:12:07.711169] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.433 [2024-12-06 18:12:07.711247] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:42.433 [2024-12-06 18:12:07.711509] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.433 [2024-12-06 18:12:07.714301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.433 pt1 00:13:42.433 [2024-12-06 18:12:07.714509] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.433 malloc2 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.433 [2024-12-06 18:12:07.760832] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:42.433 [2024-12-06 18:12:07.761076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.433 [2024-12-06 18:12:07.761126] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:42.433 [2024-12-06 18:12:07.761143] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.433 [2024-12-06 18:12:07.764141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.433 pt2 00:13:42.433 [2024-12-06 18:12:07.764340] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.433 malloc3 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.433 [2024-12-06 18:12:07.818121] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:42.433 [2024-12-06 18:12:07.818337] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.433 [2024-12-06 18:12:07.818415] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:42.433 [2024-12-06 18:12:07.818582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.433 [2024-12-06 18:12:07.821524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.433 [2024-12-06 18:12:07.821579] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:42.433 pt3 00:13:42.433 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.434 malloc4 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.434 [2024-12-06 18:12:07.867244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:42.434 [2024-12-06 18:12:07.867448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.434 [2024-12-06 18:12:07.867523] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:42.434 [2024-12-06 18:12:07.867665] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.434 [2024-12-06 18:12:07.870489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.434 [2024-12-06 18:12:07.870642] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:42.434 pt4 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.434 [2024-12-06 18:12:07.879499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:42.434 [2024-12-06 
18:12:07.881964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:42.434 [2024-12-06 18:12:07.882205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:42.434 [2024-12-06 18:12:07.882396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:42.434 [2024-12-06 18:12:07.882777] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:42.434 [2024-12-06 18:12:07.882903] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:42.434 [2024-12-06 18:12:07.883391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:42.434 [2024-12-06 18:12:07.883742] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:42.434 [2024-12-06 18:12:07.883798] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:42.434 [2024-12-06 18:12:07.884031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.434 "name": "raid_bdev1", 00:13:42.434 "uuid": "bb04f3d8-d01c-4956-a69c-35f06d5bea4a", 00:13:42.434 "strip_size_kb": 64, 00:13:42.434 "state": "online", 00:13:42.434 "raid_level": "raid0", 00:13:42.434 "superblock": true, 00:13:42.434 "num_base_bdevs": 4, 00:13:42.434 "num_base_bdevs_discovered": 4, 00:13:42.434 "num_base_bdevs_operational": 4, 00:13:42.434 "base_bdevs_list": [ 00:13:42.434 { 00:13:42.434 "name": "pt1", 00:13:42.434 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:42.434 "is_configured": true, 00:13:42.434 "data_offset": 2048, 00:13:42.434 "data_size": 63488 00:13:42.434 }, 00:13:42.434 { 00:13:42.434 "name": "pt2", 00:13:42.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.434 "is_configured": true, 00:13:42.434 "data_offset": 2048, 00:13:42.434 "data_size": 63488 00:13:42.434 }, 00:13:42.434 { 00:13:42.434 "name": "pt3", 00:13:42.434 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.434 "is_configured": true, 00:13:42.434 "data_offset": 2048, 00:13:42.434 
"data_size": 63488 00:13:42.434 }, 00:13:42.434 { 00:13:42.434 "name": "pt4", 00:13:42.434 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:42.434 "is_configured": true, 00:13:42.434 "data_offset": 2048, 00:13:42.434 "data_size": 63488 00:13:42.434 } 00:13:42.434 ] 00:13:42.434 }' 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.434 18:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.003 [2024-12-06 18:12:08.416591] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:43.003 "name": "raid_bdev1", 00:13:43.003 "aliases": [ 00:13:43.003 "bb04f3d8-d01c-4956-a69c-35f06d5bea4a" 
00:13:43.003 ], 00:13:43.003 "product_name": "Raid Volume", 00:13:43.003 "block_size": 512, 00:13:43.003 "num_blocks": 253952, 00:13:43.003 "uuid": "bb04f3d8-d01c-4956-a69c-35f06d5bea4a", 00:13:43.003 "assigned_rate_limits": { 00:13:43.003 "rw_ios_per_sec": 0, 00:13:43.003 "rw_mbytes_per_sec": 0, 00:13:43.003 "r_mbytes_per_sec": 0, 00:13:43.003 "w_mbytes_per_sec": 0 00:13:43.003 }, 00:13:43.003 "claimed": false, 00:13:43.003 "zoned": false, 00:13:43.003 "supported_io_types": { 00:13:43.003 "read": true, 00:13:43.003 "write": true, 00:13:43.003 "unmap": true, 00:13:43.003 "flush": true, 00:13:43.003 "reset": true, 00:13:43.003 "nvme_admin": false, 00:13:43.003 "nvme_io": false, 00:13:43.003 "nvme_io_md": false, 00:13:43.003 "write_zeroes": true, 00:13:43.003 "zcopy": false, 00:13:43.003 "get_zone_info": false, 00:13:43.003 "zone_management": false, 00:13:43.003 "zone_append": false, 00:13:43.003 "compare": false, 00:13:43.003 "compare_and_write": false, 00:13:43.003 "abort": false, 00:13:43.003 "seek_hole": false, 00:13:43.003 "seek_data": false, 00:13:43.003 "copy": false, 00:13:43.003 "nvme_iov_md": false 00:13:43.003 }, 00:13:43.003 "memory_domains": [ 00:13:43.003 { 00:13:43.003 "dma_device_id": "system", 00:13:43.003 "dma_device_type": 1 00:13:43.003 }, 00:13:43.003 { 00:13:43.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.003 "dma_device_type": 2 00:13:43.003 }, 00:13:43.003 { 00:13:43.003 "dma_device_id": "system", 00:13:43.003 "dma_device_type": 1 00:13:43.003 }, 00:13:43.003 { 00:13:43.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.003 "dma_device_type": 2 00:13:43.003 }, 00:13:43.003 { 00:13:43.003 "dma_device_id": "system", 00:13:43.003 "dma_device_type": 1 00:13:43.003 }, 00:13:43.003 { 00:13:43.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.003 "dma_device_type": 2 00:13:43.003 }, 00:13:43.003 { 00:13:43.003 "dma_device_id": "system", 00:13:43.003 "dma_device_type": 1 00:13:43.003 }, 00:13:43.003 { 00:13:43.003 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:43.003 "dma_device_type": 2 00:13:43.003 } 00:13:43.003 ], 00:13:43.003 "driver_specific": { 00:13:43.003 "raid": { 00:13:43.003 "uuid": "bb04f3d8-d01c-4956-a69c-35f06d5bea4a", 00:13:43.003 "strip_size_kb": 64, 00:13:43.003 "state": "online", 00:13:43.003 "raid_level": "raid0", 00:13:43.003 "superblock": true, 00:13:43.003 "num_base_bdevs": 4, 00:13:43.003 "num_base_bdevs_discovered": 4, 00:13:43.003 "num_base_bdevs_operational": 4, 00:13:43.003 "base_bdevs_list": [ 00:13:43.003 { 00:13:43.003 "name": "pt1", 00:13:43.003 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.003 "is_configured": true, 00:13:43.003 "data_offset": 2048, 00:13:43.003 "data_size": 63488 00:13:43.003 }, 00:13:43.003 { 00:13:43.003 "name": "pt2", 00:13:43.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.003 "is_configured": true, 00:13:43.003 "data_offset": 2048, 00:13:43.003 "data_size": 63488 00:13:43.003 }, 00:13:43.003 { 00:13:43.003 "name": "pt3", 00:13:43.003 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.003 "is_configured": true, 00:13:43.003 "data_offset": 2048, 00:13:43.003 "data_size": 63488 00:13:43.003 }, 00:13:43.003 { 00:13:43.003 "name": "pt4", 00:13:43.003 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:43.003 "is_configured": true, 00:13:43.003 "data_offset": 2048, 00:13:43.003 "data_size": 63488 00:13:43.003 } 00:13:43.003 ] 00:13:43.003 } 00:13:43.003 } 00:13:43.003 }' 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:43.003 pt2 00:13:43.003 pt3 00:13:43.003 pt4' 00:13:43.003 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.262 18:12:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.262 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.521 [2024-12-06 18:12:08.796647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bb04f3d8-d01c-4956-a69c-35f06d5bea4a 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bb04f3d8-d01c-4956-a69c-35f06d5bea4a ']' 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.521 [2024-12-06 18:12:08.844241] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.521 [2024-12-06 18:12:08.844451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.521 [2024-12-06 18:12:08.844645] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.521 [2024-12-06 18:12:08.844760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.521 [2024-12-06 18:12:08.844798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.521 18:12:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.521 18:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.521 [2024-12-06 18:12:09.004379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:43.521 [2024-12-06 18:12:09.007040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:43.521 [2024-12-06 18:12:09.007231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:43.521 [2024-12-06 18:12:09.007335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:43.521 [2024-12-06 18:12:09.007468] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:43.521 [2024-12-06 18:12:09.007674] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:43.521 [2024-12-06 18:12:09.007872] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:43.521 [2024-12-06 18:12:09.008077] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:43.521 [2024-12-06 18:12:09.008291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.521 [2024-12-06 18:12:09.008320] bdev_raid.c: 380:raid_bdev_cleanup:
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:43.521 request: 00:13:43.521 { 00:13:43.521 "name": "raid_bdev1", 00:13:43.521 "raid_level": "raid0", 00:13:43.521 "base_bdevs": [ 00:13:43.521 "malloc1", 00:13:43.521 "malloc2", 00:13:43.521 "malloc3", 00:13:43.521 "malloc4" 00:13:43.521 ], 00:13:43.521 "strip_size_kb": 64, 00:13:43.521 "superblock": false, 00:13:43.521 "method": "bdev_raid_create", 00:13:43.521 "req_id": 1 00:13:43.521 } 00:13:43.521 Got JSON-RPC error response 00:13:43.521 response: 00:13:43.521 { 00:13:43.521 "code": -17, 00:13:43.521 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:43.521 } 00:13:43.521 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:43.521 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:43.521 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.521 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.521 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.521 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:43.521 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.521 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.521 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.521 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.780 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:43.780 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:43.780 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u
00000000-0000-0000-0000-000000000001 00:13:43.780 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.780 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.780 [2024-12-06 18:12:09.072626] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:43.780 [2024-12-06 18:12:09.072849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.780 [2024-12-06 18:12:09.072931] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:43.780 [2024-12-06 18:12:09.073138] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.780 [2024-12-06 18:12:09.076088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.780 [2024-12-06 18:12:09.076252] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:43.780 [2024-12-06 18:12:09.076457] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:43.780 [2024-12-06 18:12:09.076632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:43.780 pt1 00:13:43.780 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.780 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:43.780 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.780 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.780 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:43.780 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.781 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:43.781 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.781 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.781 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.781 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.781 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.781 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.781 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.781 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.781 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.781 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.781 "name": "raid_bdev1", 00:13:43.781 "uuid": "bb04f3d8-d01c-4956-a69c-35f06d5bea4a", 00:13:43.781 "strip_size_kb": 64, 00:13:43.781 "state": "configuring", 00:13:43.781 "raid_level": "raid0", 00:13:43.781 "superblock": true, 00:13:43.781 "num_base_bdevs": 4, 00:13:43.781 "num_base_bdevs_discovered": 1, 00:13:43.781 "num_base_bdevs_operational": 4, 00:13:43.781 "base_bdevs_list": [ 00:13:43.781 { 00:13:43.781 "name": "pt1", 00:13:43.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.781 "is_configured": true, 00:13:43.781 "data_offset": 2048, 00:13:43.781 "data_size": 63488 00:13:43.781 }, 00:13:43.781 { 00:13:43.781 "name": null, 00:13:43.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.781 "is_configured": false, 00:13:43.781 "data_offset": 2048, 00:13:43.781 "data_size": 63488 00:13:43.781 }, 00:13:43.781 { 00:13:43.781 "name": null, 00:13:43.781 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:13:43.781 "is_configured": false, 00:13:43.781 "data_offset": 2048, 00:13:43.781 "data_size": 63488 00:13:43.781 }, 00:13:43.781 { 00:13:43.781 "name": null, 00:13:43.781 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:43.781 "is_configured": false, 00:13:43.781 "data_offset": 2048, 00:13:43.781 "data_size": 63488 00:13:43.781 } 00:13:43.781 ] 00:13:43.781 }' 00:13:43.781 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.781 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.361 [2024-12-06 18:12:09.589240] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:44.361 [2024-12-06 18:12:09.590268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.361 [2024-12-06 18:12:09.590309] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:44.361 [2024-12-06 18:12:09.590329] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.361 [2024-12-06 18:12:09.590952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.361 [2024-12-06 18:12:09.590989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:44.361 [2024-12-06 18:12:09.591095] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:44.361 [2024-12-06 18:12:09.591132] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:44.361 pt2 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.361 [2024-12-06 18:12:09.597193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.361 18:12:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.361 "name": "raid_bdev1", 00:13:44.361 "uuid": "bb04f3d8-d01c-4956-a69c-35f06d5bea4a", 00:13:44.361 "strip_size_kb": 64, 00:13:44.361 "state": "configuring", 00:13:44.361 "raid_level": "raid0", 00:13:44.361 "superblock": true, 00:13:44.361 "num_base_bdevs": 4, 00:13:44.361 "num_base_bdevs_discovered": 1, 00:13:44.361 "num_base_bdevs_operational": 4, 00:13:44.361 "base_bdevs_list": [ 00:13:44.361 { 00:13:44.361 "name": "pt1", 00:13:44.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:44.361 "is_configured": true, 00:13:44.361 "data_offset": 2048, 00:13:44.361 "data_size": 63488 00:13:44.361 }, 00:13:44.361 { 00:13:44.361 "name": null, 00:13:44.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.361 "is_configured": false, 00:13:44.361 "data_offset": 0, 00:13:44.361 "data_size": 63488 00:13:44.361 }, 00:13:44.361 { 00:13:44.361 "name": null, 00:13:44.361 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:44.361 "is_configured": false, 00:13:44.361 "data_offset": 2048, 00:13:44.361 "data_size": 63488 00:13:44.361 }, 00:13:44.361 { 00:13:44.361 "name": null, 00:13:44.361 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:44.361 "is_configured": false, 00:13:44.361 "data_offset": 2048, 00:13:44.361 "data_size": 63488 00:13:44.361 } 00:13:44.361 ] 00:13:44.361 }' 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.361 18:12:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:44.648 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:44.648 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:44.648 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:44.648 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.648 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.648 [2024-12-06 18:12:10.137358] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:44.648 [2024-12-06 18:12:10.137563] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.648 [2024-12-06 18:12:10.137638] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:44.648 [2024-12-06 18:12:10.137875] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.648 [2024-12-06 18:12:10.138457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.648 [2024-12-06 18:12:10.138483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:44.648 [2024-12-06 18:12:10.138588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:44.648 [2024-12-06 18:12:10.138620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:44.648 pt2 00:13:44.648 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.648 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:44.648 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:44.648 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:44.648 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.648 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.649 [2024-12-06 18:12:10.145293] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:44.649 [2024-12-06 18:12:10.145563] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.649 [2024-12-06 18:12:10.145632] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:44.649 [2024-12-06 18:12:10.145775] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.649 [2024-12-06 18:12:10.146307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.649 [2024-12-06 18:12:10.146489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:44.649 [2024-12-06 18:12:10.146697] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:44.649 [2024-12-06 18:12:10.146870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:44.649 pt3 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.649 [2024-12-06 18:12:10.153287] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:44.649 [2024-12-06 18:12:10.153515] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.649 [2024-12-06 18:12:10.153553] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:44.649 [2024-12-06 18:12:10.153569] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.649 [2024-12-06 18:12:10.154051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.649 [2024-12-06 18:12:10.154086] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:44.649 [2024-12-06 18:12:10.154196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:44.649 [2024-12-06 18:12:10.154228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:44.649 [2024-12-06 18:12:10.154448] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:44.649 [2024-12-06 18:12:10.154470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:44.649 [2024-12-06 18:12:10.154800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:44.649 [2024-12-06 18:12:10.154989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:44.649 [2024-12-06 18:12:10.155011] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:44.649 [2024-12-06 18:12:10.155166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.649 pt4 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.649 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.907 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.907 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.907 "name": "raid_bdev1", 00:13:44.907 "uuid": "bb04f3d8-d01c-4956-a69c-35f06d5bea4a", 00:13:44.907 "strip_size_kb": 64, 00:13:44.907 "state": "online", 00:13:44.907 "raid_level": "raid0", 00:13:44.907 
"superblock": true, 00:13:44.907 "num_base_bdevs": 4, 00:13:44.907 "num_base_bdevs_discovered": 4, 00:13:44.907 "num_base_bdevs_operational": 4, 00:13:44.907 "base_bdevs_list": [ 00:13:44.907 { 00:13:44.907 "name": "pt1", 00:13:44.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:44.907 "is_configured": true, 00:13:44.907 "data_offset": 2048, 00:13:44.907 "data_size": 63488 00:13:44.907 }, 00:13:44.907 { 00:13:44.907 "name": "pt2", 00:13:44.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.907 "is_configured": true, 00:13:44.907 "data_offset": 2048, 00:13:44.907 "data_size": 63488 00:13:44.907 }, 00:13:44.907 { 00:13:44.907 "name": "pt3", 00:13:44.907 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:44.907 "is_configured": true, 00:13:44.907 "data_offset": 2048, 00:13:44.907 "data_size": 63488 00:13:44.907 }, 00:13:44.907 { 00:13:44.907 "name": "pt4", 00:13:44.907 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:44.907 "is_configured": true, 00:13:44.907 "data_offset": 2048, 00:13:44.907 "data_size": 63488 00:13:44.907 } 00:13:44.907 ] 00:13:44.907 }' 00:13:44.907 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.907 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.166 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:45.166 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:45.166 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:45.166 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:45.166 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:45.166 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:45.166 18:12:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:45.166 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.166 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.166 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:45.166 [2024-12-06 18:12:10.678194] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:45.425 "name": "raid_bdev1", 00:13:45.425 "aliases": [ 00:13:45.425 "bb04f3d8-d01c-4956-a69c-35f06d5bea4a" 00:13:45.425 ], 00:13:45.425 "product_name": "Raid Volume", 00:13:45.425 "block_size": 512, 00:13:45.425 "num_blocks": 253952, 00:13:45.425 "uuid": "bb04f3d8-d01c-4956-a69c-35f06d5bea4a", 00:13:45.425 "assigned_rate_limits": { 00:13:45.425 "rw_ios_per_sec": 0, 00:13:45.425 "rw_mbytes_per_sec": 0, 00:13:45.425 "r_mbytes_per_sec": 0, 00:13:45.425 "w_mbytes_per_sec": 0 00:13:45.425 }, 00:13:45.425 "claimed": false, 00:13:45.425 "zoned": false, 00:13:45.425 "supported_io_types": { 00:13:45.425 "read": true, 00:13:45.425 "write": true, 00:13:45.425 "unmap": true, 00:13:45.425 "flush": true, 00:13:45.425 "reset": true, 00:13:45.425 "nvme_admin": false, 00:13:45.425 "nvme_io": false, 00:13:45.425 "nvme_io_md": false, 00:13:45.425 "write_zeroes": true, 00:13:45.425 "zcopy": false, 00:13:45.425 "get_zone_info": false, 00:13:45.425 "zone_management": false, 00:13:45.425 "zone_append": false, 00:13:45.425 "compare": false, 00:13:45.425 "compare_and_write": false, 00:13:45.425 "abort": false, 00:13:45.425 "seek_hole": false, 00:13:45.425 "seek_data": false, 00:13:45.425 "copy": false, 00:13:45.425 "nvme_iov_md": false 00:13:45.425 }, 00:13:45.425 
"memory_domains": [ 00:13:45.425 { 00:13:45.425 "dma_device_id": "system", 00:13:45.425 "dma_device_type": 1 00:13:45.425 }, 00:13:45.425 { 00:13:45.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.425 "dma_device_type": 2 00:13:45.425 }, 00:13:45.425 { 00:13:45.425 "dma_device_id": "system", 00:13:45.425 "dma_device_type": 1 00:13:45.425 }, 00:13:45.425 { 00:13:45.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.425 "dma_device_type": 2 00:13:45.425 }, 00:13:45.425 { 00:13:45.425 "dma_device_id": "system", 00:13:45.425 "dma_device_type": 1 00:13:45.425 }, 00:13:45.425 { 00:13:45.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.425 "dma_device_type": 2 00:13:45.425 }, 00:13:45.425 { 00:13:45.425 "dma_device_id": "system", 00:13:45.425 "dma_device_type": 1 00:13:45.425 }, 00:13:45.425 { 00:13:45.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.425 "dma_device_type": 2 00:13:45.425 } 00:13:45.425 ], 00:13:45.425 "driver_specific": { 00:13:45.425 "raid": { 00:13:45.425 "uuid": "bb04f3d8-d01c-4956-a69c-35f06d5bea4a", 00:13:45.425 "strip_size_kb": 64, 00:13:45.425 "state": "online", 00:13:45.425 "raid_level": "raid0", 00:13:45.425 "superblock": true, 00:13:45.425 "num_base_bdevs": 4, 00:13:45.425 "num_base_bdevs_discovered": 4, 00:13:45.425 "num_base_bdevs_operational": 4, 00:13:45.425 "base_bdevs_list": [ 00:13:45.425 { 00:13:45.425 "name": "pt1", 00:13:45.425 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:45.425 "is_configured": true, 00:13:45.425 "data_offset": 2048, 00:13:45.425 "data_size": 63488 00:13:45.425 }, 00:13:45.425 { 00:13:45.425 "name": "pt2", 00:13:45.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:45.425 "is_configured": true, 00:13:45.425 "data_offset": 2048, 00:13:45.425 "data_size": 63488 00:13:45.425 }, 00:13:45.425 { 00:13:45.425 "name": "pt3", 00:13:45.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:45.425 "is_configured": true, 00:13:45.425 "data_offset": 2048, 00:13:45.425 "data_size": 63488 
00:13:45.425 }, 00:13:45.425 { 00:13:45.425 "name": "pt4", 00:13:45.425 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:45.425 "is_configured": true, 00:13:45.425 "data_offset": 2048, 00:13:45.425 "data_size": 63488 00:13:45.425 } 00:13:45.425 ] 00:13:45.425 } 00:13:45.425 } 00:13:45.425 }' 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:45.425 pt2 00:13:45.425 pt3 00:13:45.425 pt4' 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.425 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.684 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.684 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.684 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.684 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:45.684 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.684 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.684 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.684 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.685 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.685 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.685 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.685 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:45.685 18:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:13:45.685 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.685 18:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.685 [2024-12-06 18:12:11.058216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bb04f3d8-d01c-4956-a69c-35f06d5bea4a '!=' bb04f3d8-d01c-4956-a69c-35f06d5bea4a ']' 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70887 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70887 ']' 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70887 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70887 00:13:45.685 killing process with pid 70887 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70887' 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70887 00:13:45.685 [2024-12-06 18:12:11.135216] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:45.685 18:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70887 00:13:45.685 [2024-12-06 18:12:11.135311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:45.685 [2024-12-06 18:12:11.135407] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:45.685 [2024-12-06 18:12:11.135423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:46.251 [2024-12-06 18:12:11.491681] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:47.186 18:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:47.186 00:13:47.186 real 0m6.004s 00:13:47.186 user 0m9.068s 00:13:47.186 sys 0m0.868s 00:13:47.186 18:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.186 18:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.186 ************************************ 00:13:47.186 END TEST raid_superblock_test 
00:13:47.186 ************************************ 00:13:47.186 18:12:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:13:47.186 18:12:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:47.186 18:12:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.186 18:12:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:47.186 ************************************ 00:13:47.186 START TEST raid_read_error_test 00:13:47.186 ************************************ 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Qz0BhQu2By 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71157 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71157 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71157 ']' 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.186 18:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.444 [2024-12-06 18:12:12.734150] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:13:47.444 [2024-12-06 18:12:12.734316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71157 ] 00:13:47.444 [2024-12-06 18:12:12.909059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.702 [2024-12-06 18:12:13.036912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.958 [2024-12-06 18:12:13.243183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.958 [2024-12-06 18:12:13.243238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 BaseBdev1_malloc 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 true 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 [2024-12-06 18:12:13.823745] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:48.523 [2024-12-06 18:12:13.823965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.523 [2024-12-06 18:12:13.824039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:48.523 [2024-12-06 18:12:13.824215] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.523 [2024-12-06 18:12:13.827062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.523 [2024-12-06 18:12:13.827114] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:48.523 BaseBdev1 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 BaseBdev2_malloc 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 true 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 [2024-12-06 18:12:13.885082] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:48.523 [2024-12-06 18:12:13.885280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.523 [2024-12-06 18:12:13.885351] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:48.523 [2024-12-06 18:12:13.885462] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.523 [2024-12-06 18:12:13.888444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.523 [2024-12-06 18:12:13.888510] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:48.523 BaseBdev2 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 BaseBdev3_malloc 00:13:48.523 18:12:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 true 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 [2024-12-06 18:12:13.956398] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:48.523 [2024-12-06 18:12:13.956650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.523 [2024-12-06 18:12:13.956687] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:48.523 [2024-12-06 18:12:13.956706] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.523 [2024-12-06 18:12:13.959602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.523 [2024-12-06 18:12:13.959831] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:48.523 BaseBdev3 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 18:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 BaseBdev4_malloc 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 true 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 [2024-12-06 18:12:14.017752] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:48.523 [2024-12-06 18:12:14.018042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.523 [2024-12-06 18:12:14.018080] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:48.523 [2024-12-06 18:12:14.018100] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.523 [2024-12-06 18:12:14.021072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.523 [2024-12-06 18:12:14.021157] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:48.523 BaseBdev4 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 [2024-12-06 18:12:14.026063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.523 [2024-12-06 18:12:14.028768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.523 [2024-12-06 18:12:14.029031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.523 [2024-12-06 18:12:14.029264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:48.523 [2024-12-06 18:12:14.029680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:48.523 [2024-12-06 18:12:14.029841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:48.523 [2024-12-06 18:12:14.030261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:48.523 [2024-12-06 18:12:14.030707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:48.523 [2024-12-06 18:12:14.030848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:48.523 [2024-12-06 18:12:14.031204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:48.523 18:12:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.781 18:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.781 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.781 "name": "raid_bdev1", 00:13:48.781 "uuid": "d1cd8cca-0f32-4cb3-b41d-05eac1ed8067", 00:13:48.781 "strip_size_kb": 64, 00:13:48.781 "state": "online", 00:13:48.781 "raid_level": "raid0", 00:13:48.781 "superblock": true, 00:13:48.781 "num_base_bdevs": 4, 00:13:48.781 "num_base_bdevs_discovered": 4, 00:13:48.781 "num_base_bdevs_operational": 4, 00:13:48.781 "base_bdevs_list": [ 00:13:48.781 
{ 00:13:48.781 "name": "BaseBdev1", 00:13:48.781 "uuid": "2a5824ec-f7f8-57f1-9b9c-999f7a603913", 00:13:48.781 "is_configured": true, 00:13:48.781 "data_offset": 2048, 00:13:48.781 "data_size": 63488 00:13:48.781 }, 00:13:48.781 { 00:13:48.781 "name": "BaseBdev2", 00:13:48.781 "uuid": "86543939-6d66-5489-b071-5cf6225da04f", 00:13:48.781 "is_configured": true, 00:13:48.781 "data_offset": 2048, 00:13:48.781 "data_size": 63488 00:13:48.781 }, 00:13:48.781 { 00:13:48.781 "name": "BaseBdev3", 00:13:48.781 "uuid": "d57f51d9-6fc6-57b5-adc4-5ce45a0efcd7", 00:13:48.781 "is_configured": true, 00:13:48.781 "data_offset": 2048, 00:13:48.781 "data_size": 63488 00:13:48.781 }, 00:13:48.781 { 00:13:48.781 "name": "BaseBdev4", 00:13:48.781 "uuid": "33b9fe89-9871-514d-a018-ae2ed82dff8f", 00:13:48.781 "is_configured": true, 00:13:48.781 "data_offset": 2048, 00:13:48.781 "data_size": 63488 00:13:48.781 } 00:13:48.781 ] 00:13:48.781 }' 00:13:48.781 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.781 18:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.066 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:49.066 18:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:49.324 [2024-12-06 18:12:14.672856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.258 18:12:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.258 18:12:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.258 "name": "raid_bdev1", 00:13:50.258 "uuid": "d1cd8cca-0f32-4cb3-b41d-05eac1ed8067", 00:13:50.258 "strip_size_kb": 64, 00:13:50.258 "state": "online", 00:13:50.258 "raid_level": "raid0", 00:13:50.258 "superblock": true, 00:13:50.258 "num_base_bdevs": 4, 00:13:50.258 "num_base_bdevs_discovered": 4, 00:13:50.258 "num_base_bdevs_operational": 4, 00:13:50.258 "base_bdevs_list": [ 00:13:50.258 { 00:13:50.258 "name": "BaseBdev1", 00:13:50.258 "uuid": "2a5824ec-f7f8-57f1-9b9c-999f7a603913", 00:13:50.258 "is_configured": true, 00:13:50.258 "data_offset": 2048, 00:13:50.258 "data_size": 63488 00:13:50.258 }, 00:13:50.258 { 00:13:50.258 "name": "BaseBdev2", 00:13:50.258 "uuid": "86543939-6d66-5489-b071-5cf6225da04f", 00:13:50.258 "is_configured": true, 00:13:50.258 "data_offset": 2048, 00:13:50.258 "data_size": 63488 00:13:50.258 }, 00:13:50.258 { 00:13:50.258 "name": "BaseBdev3", 00:13:50.258 "uuid": "d57f51d9-6fc6-57b5-adc4-5ce45a0efcd7", 00:13:50.258 "is_configured": true, 00:13:50.258 "data_offset": 2048, 00:13:50.258 "data_size": 63488 00:13:50.258 }, 00:13:50.258 { 00:13:50.258 "name": "BaseBdev4", 00:13:50.258 "uuid": "33b9fe89-9871-514d-a018-ae2ed82dff8f", 00:13:50.258 "is_configured": true, 00:13:50.258 "data_offset": 2048, 00:13:50.258 "data_size": 63488 00:13:50.258 } 00:13:50.258 ] 00:13:50.258 }' 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.258 18:12:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.823 18:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:50.823 18:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.823 18:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.823 [2024-12-06 18:12:16.065108] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:50.823 [2024-12-06 18:12:16.065147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.823 [2024-12-06 18:12:16.068557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.823 [2024-12-06 18:12:16.068633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.823 [2024-12-06 18:12:16.068693] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.823 [2024-12-06 18:12:16.068711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:50.823 { 00:13:50.823 "results": [ 00:13:50.823 { 00:13:50.823 "job": "raid_bdev1", 00:13:50.823 "core_mask": "0x1", 00:13:50.823 "workload": "randrw", 00:13:50.824 "percentage": 50, 00:13:50.824 "status": "finished", 00:13:50.824 "queue_depth": 1, 00:13:50.824 "io_size": 131072, 00:13:50.824 "runtime": 1.389413, 00:13:50.824 "iops": 10125.858905883277, 00:13:50.824 "mibps": 1265.7323632354096, 00:13:50.824 "io_failed": 1, 00:13:50.824 "io_timeout": 0, 00:13:50.824 "avg_latency_us": 136.85912670414163, 00:13:50.824 "min_latency_us": 39.79636363636364, 00:13:50.824 "max_latency_us": 1817.1345454545456 00:13:50.824 } 00:13:50.824 ], 00:13:50.824 "core_count": 1 00:13:50.824 } 00:13:50.824 18:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.824 18:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71157 00:13:50.824 18:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71157 ']' 00:13:50.824 18:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71157 00:13:50.824 18:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:50.824 18:12:16 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.824 18:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71157 00:13:50.824 killing process with pid 71157 00:13:50.824 18:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:50.824 18:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:50.824 18:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71157' 00:13:50.824 18:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71157 00:13:50.824 18:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71157 00:13:50.824 [2024-12-06 18:12:16.100047] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.081 [2024-12-06 18:12:16.397133] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.012 18:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:52.012 18:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Qz0BhQu2By 00:13:52.012 18:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:52.012 18:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:13:52.012 18:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:52.012 18:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:52.012 18:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:52.012 18:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:13:52.012 00:13:52.012 real 0m4.921s 00:13:52.012 user 0m6.105s 00:13:52.012 sys 0m0.577s 00:13:52.269 ************************************ 00:13:52.269 END TEST raid_read_error_test 
00:13:52.269 ************************************ 00:13:52.269 18:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.269 18:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.269 18:12:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:13:52.269 18:12:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:52.269 18:12:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.269 18:12:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.269 ************************************ 00:13:52.269 START TEST raid_write_error_test 00:13:52.269 ************************************ 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RBBw8ZJLHF 00:13:52.269 18:12:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71304 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71304 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71304 ']' 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.269 18:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.269 [2024-12-06 18:12:17.708618] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:13:52.269 [2024-12-06 18:12:17.708822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71304 ] 00:13:52.527 [2024-12-06 18:12:17.890462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.527 [2024-12-06 18:12:18.021746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.785 [2024-12-06 18:12:18.231008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.785 [2024-12-06 18:12:18.231067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.354 BaseBdev1_malloc 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.354 true 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.354 [2024-12-06 18:12:18.764444] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:53.354 [2024-12-06 18:12:18.764526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.354 [2024-12-06 18:12:18.764556] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:53.354 [2024-12-06 18:12:18.764584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.354 [2024-12-06 18:12:18.767522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.354 [2024-12-06 18:12:18.767596] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:53.354 BaseBdev1 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.354 BaseBdev2_malloc 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:53.354 18:12:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.354 true 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.354 [2024-12-06 18:12:18.822195] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:53.354 [2024-12-06 18:12:18.822261] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.354 [2024-12-06 18:12:18.822285] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:53.354 [2024-12-06 18:12:18.822301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.354 [2024-12-06 18:12:18.825093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.354 [2024-12-06 18:12:18.825143] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:53.354 BaseBdev2 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.354 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:53.657 BaseBdev3_malloc 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.657 true 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.657 [2024-12-06 18:12:18.891485] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:53.657 [2024-12-06 18:12:18.891548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.657 [2024-12-06 18:12:18.891583] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:53.657 [2024-12-06 18:12:18.891611] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.657 [2024-12-06 18:12:18.894378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.657 [2024-12-06 18:12:18.894428] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:53.657 BaseBdev3 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.657 BaseBdev4_malloc 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.657 true 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.657 [2024-12-06 18:12:18.947902] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:53.657 [2024-12-06 18:12:18.947970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.657 [2024-12-06 18:12:18.947998] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:53.657 [2024-12-06 18:12:18.948016] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.657 [2024-12-06 18:12:18.950865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.657 [2024-12-06 18:12:18.950921] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:53.657 BaseBdev4 
00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.657 [2024-12-06 18:12:18.955994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.657 [2024-12-06 18:12:18.958388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:53.657 [2024-12-06 18:12:18.958503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:53.657 [2024-12-06 18:12:18.958604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:53.657 [2024-12-06 18:12:18.958931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:53.657 [2024-12-06 18:12:18.958958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:53.657 [2024-12-06 18:12:18.959259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:53.657 [2024-12-06 18:12:18.959469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:53.657 [2024-12-06 18:12:18.959487] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:53.657 [2024-12-06 18:12:18.959682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.657 18:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.657 18:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.657 "name": "raid_bdev1", 00:13:53.657 "uuid": "fe21a67f-764b-40fd-85e5-880ec009205b", 00:13:53.657 "strip_size_kb": 64, 00:13:53.657 "state": "online", 00:13:53.657 "raid_level": "raid0", 00:13:53.657 "superblock": true, 00:13:53.657 "num_base_bdevs": 4, 00:13:53.657 "num_base_bdevs_discovered": 4, 00:13:53.657 
"num_base_bdevs_operational": 4, 00:13:53.657 "base_bdevs_list": [ 00:13:53.657 { 00:13:53.657 "name": "BaseBdev1", 00:13:53.657 "uuid": "e8fe5a49-fc28-5604-8b9d-25ac9037919c", 00:13:53.657 "is_configured": true, 00:13:53.657 "data_offset": 2048, 00:13:53.657 "data_size": 63488 00:13:53.657 }, 00:13:53.657 { 00:13:53.658 "name": "BaseBdev2", 00:13:53.658 "uuid": "525cdc13-2ef2-583d-a9af-0fb9e7efe3f2", 00:13:53.658 "is_configured": true, 00:13:53.658 "data_offset": 2048, 00:13:53.658 "data_size": 63488 00:13:53.658 }, 00:13:53.658 { 00:13:53.658 "name": "BaseBdev3", 00:13:53.658 "uuid": "9d3c15c8-b791-5e44-9f16-4bd47df55ef2", 00:13:53.658 "is_configured": true, 00:13:53.658 "data_offset": 2048, 00:13:53.658 "data_size": 63488 00:13:53.658 }, 00:13:53.658 { 00:13:53.658 "name": "BaseBdev4", 00:13:53.658 "uuid": "292db14e-9ed9-5d1a-9f03-397045f08e03", 00:13:53.658 "is_configured": true, 00:13:53.658 "data_offset": 2048, 00:13:53.658 "data_size": 63488 00:13:53.658 } 00:13:53.658 ] 00:13:53.658 }' 00:13:53.658 18:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.658 18:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.931 18:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:53.931 18:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:54.190 [2024-12-06 18:12:19.601537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.125 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.125 "name": "raid_bdev1", 00:13:55.125 "uuid": "fe21a67f-764b-40fd-85e5-880ec009205b", 00:13:55.126 "strip_size_kb": 64, 00:13:55.126 "state": "online", 00:13:55.126 "raid_level": "raid0", 00:13:55.126 "superblock": true, 00:13:55.126 "num_base_bdevs": 4, 00:13:55.126 "num_base_bdevs_discovered": 4, 00:13:55.126 "num_base_bdevs_operational": 4, 00:13:55.126 "base_bdevs_list": [ 00:13:55.126 { 00:13:55.126 "name": "BaseBdev1", 00:13:55.126 "uuid": "e8fe5a49-fc28-5604-8b9d-25ac9037919c", 00:13:55.126 "is_configured": true, 00:13:55.126 "data_offset": 2048, 00:13:55.126 "data_size": 63488 00:13:55.126 }, 00:13:55.126 { 00:13:55.126 "name": "BaseBdev2", 00:13:55.126 "uuid": "525cdc13-2ef2-583d-a9af-0fb9e7efe3f2", 00:13:55.126 "is_configured": true, 00:13:55.126 "data_offset": 2048, 00:13:55.126 "data_size": 63488 00:13:55.126 }, 00:13:55.126 { 00:13:55.126 "name": "BaseBdev3", 00:13:55.126 "uuid": "9d3c15c8-b791-5e44-9f16-4bd47df55ef2", 00:13:55.126 "is_configured": true, 00:13:55.126 "data_offset": 2048, 00:13:55.126 "data_size": 63488 00:13:55.126 }, 00:13:55.126 { 00:13:55.126 "name": "BaseBdev4", 00:13:55.126 "uuid": "292db14e-9ed9-5d1a-9f03-397045f08e03", 00:13:55.126 "is_configured": true, 00:13:55.126 "data_offset": 2048, 00:13:55.126 "data_size": 63488 00:13:55.126 } 00:13:55.126 ] 00:13:55.126 }' 00:13:55.126 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.126 18:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.694 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:55.694 18:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.694 18:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:55.694 [2024-12-06 18:12:20.984341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:55.694 [2024-12-06 18:12:20.984384] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.694 [2024-12-06 18:12:20.987843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.694 [2024-12-06 18:12:20.987924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.694 [2024-12-06 18:12:20.987994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.694 [2024-12-06 18:12:20.988023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:55.694 { 00:13:55.694 "results": [ 00:13:55.694 { 00:13:55.694 "job": "raid_bdev1", 00:13:55.694 "core_mask": "0x1", 00:13:55.694 "workload": "randrw", 00:13:55.694 "percentage": 50, 00:13:55.694 "status": "finished", 00:13:55.694 "queue_depth": 1, 00:13:55.694 "io_size": 131072, 00:13:55.694 "runtime": 1.380406, 00:13:55.694 "iops": 10490.391957148839, 00:13:55.694 "mibps": 1311.2989946436048, 00:13:55.694 "io_failed": 1, 00:13:55.694 "io_timeout": 0, 00:13:55.694 "avg_latency_us": 132.32442530539478, 00:13:55.694 "min_latency_us": 42.123636363636365, 00:13:55.694 "max_latency_us": 1832.0290909090909 00:13:55.694 } 00:13:55.694 ], 00:13:55.694 "core_count": 1 00:13:55.694 } 00:13:55.694 18:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.694 18:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71304 00:13:55.694 18:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71304 ']' 00:13:55.694 18:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71304 00:13:55.694 18:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:13:55.694 18:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.694 18:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71304 00:13:55.694 18:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.694 18:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.694 killing process with pid 71304 00:13:55.694 18:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71304' 00:13:55.694 18:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71304 00:13:55.694 [2024-12-06 18:12:21.022002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:55.694 18:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71304 00:13:55.953 [2024-12-06 18:12:21.313635] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:57.331 18:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RBBw8ZJLHF 00:13:57.331 18:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:57.331 18:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:57.331 18:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:13:57.331 18:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:57.331 18:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:57.331 18:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:57.331 18:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:13:57.331 00:13:57.331 real 0m4.832s 00:13:57.331 user 0m5.985s 00:13:57.331 sys 0m0.599s 00:13:57.331 
18:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.331 ************************************ 00:13:57.331 END TEST raid_write_error_test 00:13:57.331 ************************************ 00:13:57.331 18:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.331 18:12:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:57.331 18:12:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:13:57.331 18:12:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:57.331 18:12:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.331 18:12:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:57.331 ************************************ 00:13:57.331 START TEST raid_state_function_test 00:13:57.331 ************************************ 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:57.331 18:12:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:57.331 18:12:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71453 00:13:57.331 Process raid pid: 71453 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71453' 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71453 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71453 ']' 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.331 18:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.331 [2024-12-06 18:12:22.576851] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
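The records that follow exercise `raid_state_function_test`: `bdev_raid_create` is invoked while the base bdevs do not yet exist, and the harness then confirms via `bdev_raid_get_bdevs` piped through `jq` that the array remains in the `configuring` state with the expected discovered-bdev count. A minimal, self-contained sketch of that verification step is below; it is illustrative only (not part of the test run), substitutes `sed` for the suite's `jq` so it runs standalone, and inlines a JSON blob copied from the `bdev_raid_get_bdevs` output logged later in this section rather than querying a live SPDK target.

```shell
# Sketch only: stand-in for the verify_raid_bdev_state jq check.
# raid_bdev_info mirrors the bdev_raid_get_bdevs output recorded below;
# a real run captures it with: rpc_cmd bdev_raid_get_bdevs all | jq -r '...'
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "superblock": false,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4
}'

# Extract one scalar field from the blob (assumes one "key": value per line).
get_field() {
  printf '%s\n' "$raid_bdev_info" |
    sed -n "s/.*\"$1\": \"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}\$/\1/p"
}

state=$(get_field state)
discovered=$(get_field num_base_bdevs_discovered)

# The state-function test expects "configuring" with 0 of 4 bdevs discovered
# until malloc base bdevs are created and claimed one by one.
[ "$state" = "configuring" ] || exit 1
[ "$discovered" -eq 0 ] || exit 1
echo "Existed_Raid state OK: $state ($discovered/4 base bdevs discovered)"
```

As the log shows, each subsequent `bdev_malloc_create 32 512 -b BaseBdevN` increments `num_base_bdevs_discovered` while the state stays `configuring` until all four base bdevs are claimed.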
00:13:57.331 [2024-12-06 18:12:22.577005] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.331 [2024-12-06 18:12:22.755551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.590 [2024-12-06 18:12:22.912659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.850 [2024-12-06 18:12:23.165295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.850 [2024-12-06 18:12:23.165379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.109 18:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.109 18:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:58.109 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:58.109 18:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.109 18:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 [2024-12-06 18:12:23.605085] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:58.109 [2024-12-06 18:12:23.605159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:58.109 [2024-12-06 18:12:23.605176] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:58.109 [2024-12-06 18:12:23.605192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:58.109 [2024-12-06 18:12:23.605202] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:58.109 [2024-12-06 18:12:23.605215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:58.109 [2024-12-06 18:12:23.605226] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:58.109 [2024-12-06 18:12:23.605240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:58.109 18:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.109 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:58.110 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.110 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.110 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:58.110 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.110 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.110 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.110 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.110 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.110 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.110 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.110 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.110 18:12:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.110 18:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.368 18:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.368 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.368 "name": "Existed_Raid", 00:13:58.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.368 "strip_size_kb": 64, 00:13:58.368 "state": "configuring", 00:13:58.368 "raid_level": "concat", 00:13:58.368 "superblock": false, 00:13:58.368 "num_base_bdevs": 4, 00:13:58.368 "num_base_bdevs_discovered": 0, 00:13:58.368 "num_base_bdevs_operational": 4, 00:13:58.368 "base_bdevs_list": [ 00:13:58.368 { 00:13:58.368 "name": "BaseBdev1", 00:13:58.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.368 "is_configured": false, 00:13:58.368 "data_offset": 0, 00:13:58.368 "data_size": 0 00:13:58.368 }, 00:13:58.368 { 00:13:58.368 "name": "BaseBdev2", 00:13:58.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.368 "is_configured": false, 00:13:58.368 "data_offset": 0, 00:13:58.368 "data_size": 0 00:13:58.368 }, 00:13:58.368 { 00:13:58.368 "name": "BaseBdev3", 00:13:58.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.368 "is_configured": false, 00:13:58.368 "data_offset": 0, 00:13:58.368 "data_size": 0 00:13:58.368 }, 00:13:58.368 { 00:13:58.368 "name": "BaseBdev4", 00:13:58.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.369 "is_configured": false, 00:13:58.369 "data_offset": 0, 00:13:58.369 "data_size": 0 00:13:58.369 } 00:13:58.369 ] 00:13:58.369 }' 00:13:58.369 18:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.369 18:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.629 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:58.629 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.630 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.630 [2024-12-06 18:12:24.097166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:58.630 [2024-12-06 18:12:24.097219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:58.630 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.630 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:58.630 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.630 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.630 [2024-12-06 18:12:24.105175] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:58.630 [2024-12-06 18:12:24.105238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:58.630 [2024-12-06 18:12:24.105253] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:58.630 [2024-12-06 18:12:24.105268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:58.630 [2024-12-06 18:12:24.105277] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:58.630 [2024-12-06 18:12:24.105291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:58.630 [2024-12-06 18:12:24.105301] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:58.630 [2024-12-06 18:12:24.105314] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:58.630 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.630 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:58.630 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.630 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.893 [2024-12-06 18:12:24.150976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.893 BaseBdev1 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.893 [ 00:13:58.893 { 00:13:58.893 "name": "BaseBdev1", 00:13:58.893 "aliases": [ 00:13:58.893 "7356e998-8c76-4b6a-a387-cf66ea6383ea" 00:13:58.893 ], 00:13:58.893 "product_name": "Malloc disk", 00:13:58.893 "block_size": 512, 00:13:58.893 "num_blocks": 65536, 00:13:58.893 "uuid": "7356e998-8c76-4b6a-a387-cf66ea6383ea", 00:13:58.893 "assigned_rate_limits": { 00:13:58.893 "rw_ios_per_sec": 0, 00:13:58.893 "rw_mbytes_per_sec": 0, 00:13:58.893 "r_mbytes_per_sec": 0, 00:13:58.893 "w_mbytes_per_sec": 0 00:13:58.893 }, 00:13:58.893 "claimed": true, 00:13:58.893 "claim_type": "exclusive_write", 00:13:58.893 "zoned": false, 00:13:58.893 "supported_io_types": { 00:13:58.893 "read": true, 00:13:58.893 "write": true, 00:13:58.893 "unmap": true, 00:13:58.893 "flush": true, 00:13:58.893 "reset": true, 00:13:58.893 "nvme_admin": false, 00:13:58.893 "nvme_io": false, 00:13:58.893 "nvme_io_md": false, 00:13:58.893 "write_zeroes": true, 00:13:58.893 "zcopy": true, 00:13:58.893 "get_zone_info": false, 00:13:58.893 "zone_management": false, 00:13:58.893 "zone_append": false, 00:13:58.893 "compare": false, 00:13:58.893 "compare_and_write": false, 00:13:58.893 "abort": true, 00:13:58.893 "seek_hole": false, 00:13:58.893 "seek_data": false, 00:13:58.893 "copy": true, 00:13:58.893 "nvme_iov_md": false 00:13:58.893 }, 00:13:58.893 "memory_domains": [ 00:13:58.893 { 00:13:58.893 "dma_device_id": "system", 00:13:58.893 "dma_device_type": 1 00:13:58.893 }, 00:13:58.893 { 00:13:58.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.893 "dma_device_type": 2 00:13:58.893 } 00:13:58.893 ], 00:13:58.893 "driver_specific": {} 00:13:58.893 } 00:13:58.893 ] 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.893 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.893 "name": "Existed_Raid", 
00:13:58.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.893 "strip_size_kb": 64, 00:13:58.893 "state": "configuring", 00:13:58.893 "raid_level": "concat", 00:13:58.893 "superblock": false, 00:13:58.893 "num_base_bdevs": 4, 00:13:58.893 "num_base_bdevs_discovered": 1, 00:13:58.893 "num_base_bdevs_operational": 4, 00:13:58.893 "base_bdevs_list": [ 00:13:58.893 { 00:13:58.893 "name": "BaseBdev1", 00:13:58.893 "uuid": "7356e998-8c76-4b6a-a387-cf66ea6383ea", 00:13:58.894 "is_configured": true, 00:13:58.894 "data_offset": 0, 00:13:58.894 "data_size": 65536 00:13:58.894 }, 00:13:58.894 { 00:13:58.894 "name": "BaseBdev2", 00:13:58.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.894 "is_configured": false, 00:13:58.894 "data_offset": 0, 00:13:58.894 "data_size": 0 00:13:58.894 }, 00:13:58.894 { 00:13:58.894 "name": "BaseBdev3", 00:13:58.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.894 "is_configured": false, 00:13:58.894 "data_offset": 0, 00:13:58.894 "data_size": 0 00:13:58.894 }, 00:13:58.894 { 00:13:58.894 "name": "BaseBdev4", 00:13:58.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.894 "is_configured": false, 00:13:58.894 "data_offset": 0, 00:13:58.894 "data_size": 0 00:13:58.894 } 00:13:58.894 ] 00:13:58.894 }' 00:13:58.894 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.894 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.462 [2024-12-06 18:12:24.723223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:59.462 [2024-12-06 18:12:24.723307] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.462 [2024-12-06 18:12:24.731281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.462 [2024-12-06 18:12:24.733677] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:59.462 [2024-12-06 18:12:24.733736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:59.462 [2024-12-06 18:12:24.733752] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:59.462 [2024-12-06 18:12:24.733785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:59.462 [2024-12-06 18:12:24.733798] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:59.462 [2024-12-06 18:12:24.733813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.462 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.462 "name": "Existed_Raid", 00:13:59.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.462 "strip_size_kb": 64, 00:13:59.462 "state": "configuring", 00:13:59.462 "raid_level": "concat", 00:13:59.462 "superblock": false, 00:13:59.462 "num_base_bdevs": 4, 00:13:59.462 
"num_base_bdevs_discovered": 1, 00:13:59.462 "num_base_bdevs_operational": 4, 00:13:59.462 "base_bdevs_list": [ 00:13:59.462 { 00:13:59.462 "name": "BaseBdev1", 00:13:59.462 "uuid": "7356e998-8c76-4b6a-a387-cf66ea6383ea", 00:13:59.462 "is_configured": true, 00:13:59.462 "data_offset": 0, 00:13:59.462 "data_size": 65536 00:13:59.462 }, 00:13:59.462 { 00:13:59.462 "name": "BaseBdev2", 00:13:59.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.462 "is_configured": false, 00:13:59.462 "data_offset": 0, 00:13:59.462 "data_size": 0 00:13:59.462 }, 00:13:59.462 { 00:13:59.462 "name": "BaseBdev3", 00:13:59.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.463 "is_configured": false, 00:13:59.463 "data_offset": 0, 00:13:59.463 "data_size": 0 00:13:59.463 }, 00:13:59.463 { 00:13:59.463 "name": "BaseBdev4", 00:13:59.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.463 "is_configured": false, 00:13:59.463 "data_offset": 0, 00:13:59.463 "data_size": 0 00:13:59.463 } 00:13:59.463 ] 00:13:59.463 }' 00:13:59.463 18:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.463 18:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.722 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:59.722 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.722 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.981 [2024-12-06 18:12:25.278564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.981 BaseBdev2 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:59.981 18:12:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.981 [ 00:13:59.981 { 00:13:59.981 "name": "BaseBdev2", 00:13:59.981 "aliases": [ 00:13:59.981 "82b80d23-926d-4e2e-b120-8e362d8f12f6" 00:13:59.981 ], 00:13:59.981 "product_name": "Malloc disk", 00:13:59.981 "block_size": 512, 00:13:59.981 "num_blocks": 65536, 00:13:59.981 "uuid": "82b80d23-926d-4e2e-b120-8e362d8f12f6", 00:13:59.981 "assigned_rate_limits": { 00:13:59.981 "rw_ios_per_sec": 0, 00:13:59.981 "rw_mbytes_per_sec": 0, 00:13:59.981 "r_mbytes_per_sec": 0, 00:13:59.981 "w_mbytes_per_sec": 0 00:13:59.981 }, 00:13:59.981 "claimed": true, 00:13:59.981 "claim_type": "exclusive_write", 00:13:59.981 "zoned": false, 00:13:59.981 "supported_io_types": { 
00:13:59.981 "read": true, 00:13:59.981 "write": true, 00:13:59.981 "unmap": true, 00:13:59.981 "flush": true, 00:13:59.981 "reset": true, 00:13:59.981 "nvme_admin": false, 00:13:59.981 "nvme_io": false, 00:13:59.981 "nvme_io_md": false, 00:13:59.981 "write_zeroes": true, 00:13:59.981 "zcopy": true, 00:13:59.981 "get_zone_info": false, 00:13:59.981 "zone_management": false, 00:13:59.981 "zone_append": false, 00:13:59.981 "compare": false, 00:13:59.981 "compare_and_write": false, 00:13:59.981 "abort": true, 00:13:59.981 "seek_hole": false, 00:13:59.981 "seek_data": false, 00:13:59.981 "copy": true, 00:13:59.981 "nvme_iov_md": false 00:13:59.981 }, 00:13:59.981 "memory_domains": [ 00:13:59.981 { 00:13:59.981 "dma_device_id": "system", 00:13:59.981 "dma_device_type": 1 00:13:59.981 }, 00:13:59.981 { 00:13:59.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.981 "dma_device_type": 2 00:13:59.981 } 00:13:59.981 ], 00:13:59.981 "driver_specific": {} 00:13:59.981 } 00:13:59.981 ] 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.981 "name": "Existed_Raid", 00:13:59.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.981 "strip_size_kb": 64, 00:13:59.981 "state": "configuring", 00:13:59.981 "raid_level": "concat", 00:13:59.981 "superblock": false, 00:13:59.981 "num_base_bdevs": 4, 00:13:59.981 "num_base_bdevs_discovered": 2, 00:13:59.981 "num_base_bdevs_operational": 4, 00:13:59.981 "base_bdevs_list": [ 00:13:59.981 { 00:13:59.981 "name": "BaseBdev1", 00:13:59.981 "uuid": "7356e998-8c76-4b6a-a387-cf66ea6383ea", 00:13:59.981 "is_configured": true, 00:13:59.981 "data_offset": 0, 00:13:59.981 "data_size": 65536 00:13:59.981 }, 00:13:59.981 { 00:13:59.981 "name": "BaseBdev2", 00:13:59.981 "uuid": "82b80d23-926d-4e2e-b120-8e362d8f12f6", 00:13:59.981 
"is_configured": true, 00:13:59.981 "data_offset": 0, 00:13:59.981 "data_size": 65536 00:13:59.981 }, 00:13:59.981 { 00:13:59.981 "name": "BaseBdev3", 00:13:59.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.981 "is_configured": false, 00:13:59.981 "data_offset": 0, 00:13:59.981 "data_size": 0 00:13:59.981 }, 00:13:59.981 { 00:13:59.981 "name": "BaseBdev4", 00:13:59.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.981 "is_configured": false, 00:13:59.981 "data_offset": 0, 00:13:59.981 "data_size": 0 00:13:59.981 } 00:13:59.981 ] 00:13:59.981 }' 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.981 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.549 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:00.549 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.549 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.549 [2024-12-06 18:12:25.872949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:00.549 BaseBdev3 00:14:00.549 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.549 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.550 [ 00:14:00.550 { 00:14:00.550 "name": "BaseBdev3", 00:14:00.550 "aliases": [ 00:14:00.550 "3fcd9f75-ef57-476e-94fa-22d233edc296" 00:14:00.550 ], 00:14:00.550 "product_name": "Malloc disk", 00:14:00.550 "block_size": 512, 00:14:00.550 "num_blocks": 65536, 00:14:00.550 "uuid": "3fcd9f75-ef57-476e-94fa-22d233edc296", 00:14:00.550 "assigned_rate_limits": { 00:14:00.550 "rw_ios_per_sec": 0, 00:14:00.550 "rw_mbytes_per_sec": 0, 00:14:00.550 "r_mbytes_per_sec": 0, 00:14:00.550 "w_mbytes_per_sec": 0 00:14:00.550 }, 00:14:00.550 "claimed": true, 00:14:00.550 "claim_type": "exclusive_write", 00:14:00.550 "zoned": false, 00:14:00.550 "supported_io_types": { 00:14:00.550 "read": true, 00:14:00.550 "write": true, 00:14:00.550 "unmap": true, 00:14:00.550 "flush": true, 00:14:00.550 "reset": true, 00:14:00.550 "nvme_admin": false, 00:14:00.550 "nvme_io": false, 00:14:00.550 "nvme_io_md": false, 00:14:00.550 "write_zeroes": true, 00:14:00.550 "zcopy": true, 00:14:00.550 "get_zone_info": false, 00:14:00.550 "zone_management": false, 00:14:00.550 "zone_append": false, 00:14:00.550 "compare": false, 00:14:00.550 "compare_and_write": false, 
00:14:00.550 "abort": true, 00:14:00.550 "seek_hole": false, 00:14:00.550 "seek_data": false, 00:14:00.550 "copy": true, 00:14:00.550 "nvme_iov_md": false 00:14:00.550 }, 00:14:00.550 "memory_domains": [ 00:14:00.550 { 00:14:00.550 "dma_device_id": "system", 00:14:00.550 "dma_device_type": 1 00:14:00.550 }, 00:14:00.550 { 00:14:00.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.550 "dma_device_type": 2 00:14:00.550 } 00:14:00.550 ], 00:14:00.550 "driver_specific": {} 00:14:00.550 } 00:14:00.550 ] 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.550 "name": "Existed_Raid", 00:14:00.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.550 "strip_size_kb": 64, 00:14:00.550 "state": "configuring", 00:14:00.550 "raid_level": "concat", 00:14:00.550 "superblock": false, 00:14:00.550 "num_base_bdevs": 4, 00:14:00.550 "num_base_bdevs_discovered": 3, 00:14:00.550 "num_base_bdevs_operational": 4, 00:14:00.550 "base_bdevs_list": [ 00:14:00.550 { 00:14:00.550 "name": "BaseBdev1", 00:14:00.550 "uuid": "7356e998-8c76-4b6a-a387-cf66ea6383ea", 00:14:00.550 "is_configured": true, 00:14:00.550 "data_offset": 0, 00:14:00.550 "data_size": 65536 00:14:00.550 }, 00:14:00.550 { 00:14:00.550 "name": "BaseBdev2", 00:14:00.550 "uuid": "82b80d23-926d-4e2e-b120-8e362d8f12f6", 00:14:00.550 "is_configured": true, 00:14:00.550 "data_offset": 0, 00:14:00.550 "data_size": 65536 00:14:00.550 }, 00:14:00.550 { 00:14:00.550 "name": "BaseBdev3", 00:14:00.550 "uuid": "3fcd9f75-ef57-476e-94fa-22d233edc296", 00:14:00.550 "is_configured": true, 00:14:00.550 "data_offset": 0, 00:14:00.550 "data_size": 65536 00:14:00.550 }, 00:14:00.550 { 00:14:00.550 "name": "BaseBdev4", 00:14:00.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.550 "is_configured": false, 
00:14:00.550 "data_offset": 0, 00:14:00.550 "data_size": 0 00:14:00.550 } 00:14:00.550 ] 00:14:00.550 }' 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.550 18:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.119 [2024-12-06 18:12:26.469090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:01.119 [2024-12-06 18:12:26.469156] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:01.119 [2024-12-06 18:12:26.469169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:01.119 [2024-12-06 18:12:26.469528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:01.119 [2024-12-06 18:12:26.469731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:01.119 [2024-12-06 18:12:26.469751] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:01.119 [2024-12-06 18:12:26.470129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.119 BaseBdev4 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.119 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.119 [ 00:14:01.119 { 00:14:01.119 "name": "BaseBdev4", 00:14:01.119 "aliases": [ 00:14:01.119 "1efda9de-9071-41f4-912b-105995c27143" 00:14:01.119 ], 00:14:01.119 "product_name": "Malloc disk", 00:14:01.119 "block_size": 512, 00:14:01.119 "num_blocks": 65536, 00:14:01.119 "uuid": "1efda9de-9071-41f4-912b-105995c27143", 00:14:01.119 "assigned_rate_limits": { 00:14:01.119 "rw_ios_per_sec": 0, 00:14:01.119 "rw_mbytes_per_sec": 0, 00:14:01.119 "r_mbytes_per_sec": 0, 00:14:01.119 "w_mbytes_per_sec": 0 00:14:01.119 }, 00:14:01.119 "claimed": true, 00:14:01.119 "claim_type": "exclusive_write", 00:14:01.119 "zoned": false, 00:14:01.120 "supported_io_types": { 00:14:01.120 "read": true, 00:14:01.120 "write": true, 00:14:01.120 "unmap": true, 00:14:01.120 "flush": true, 00:14:01.120 "reset": true, 00:14:01.120 
"nvme_admin": false, 00:14:01.120 "nvme_io": false, 00:14:01.120 "nvme_io_md": false, 00:14:01.120 "write_zeroes": true, 00:14:01.120 "zcopy": true, 00:14:01.120 "get_zone_info": false, 00:14:01.120 "zone_management": false, 00:14:01.120 "zone_append": false, 00:14:01.120 "compare": false, 00:14:01.120 "compare_and_write": false, 00:14:01.120 "abort": true, 00:14:01.120 "seek_hole": false, 00:14:01.120 "seek_data": false, 00:14:01.120 "copy": true, 00:14:01.120 "nvme_iov_md": false 00:14:01.120 }, 00:14:01.120 "memory_domains": [ 00:14:01.120 { 00:14:01.120 "dma_device_id": "system", 00:14:01.120 "dma_device_type": 1 00:14:01.120 }, 00:14:01.120 { 00:14:01.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.120 "dma_device_type": 2 00:14:01.120 } 00:14:01.120 ], 00:14:01.120 "driver_specific": {} 00:14:01.120 } 00:14:01.120 ] 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.120 
18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.120 "name": "Existed_Raid", 00:14:01.120 "uuid": "fa0985a7-94bd-4384-94a2-21751368a4a0", 00:14:01.120 "strip_size_kb": 64, 00:14:01.120 "state": "online", 00:14:01.120 "raid_level": "concat", 00:14:01.120 "superblock": false, 00:14:01.120 "num_base_bdevs": 4, 00:14:01.120 "num_base_bdevs_discovered": 4, 00:14:01.120 "num_base_bdevs_operational": 4, 00:14:01.120 "base_bdevs_list": [ 00:14:01.120 { 00:14:01.120 "name": "BaseBdev1", 00:14:01.120 "uuid": "7356e998-8c76-4b6a-a387-cf66ea6383ea", 00:14:01.120 "is_configured": true, 00:14:01.120 "data_offset": 0, 00:14:01.120 "data_size": 65536 00:14:01.120 }, 00:14:01.120 { 00:14:01.120 "name": "BaseBdev2", 00:14:01.120 "uuid": "82b80d23-926d-4e2e-b120-8e362d8f12f6", 00:14:01.120 "is_configured": true, 00:14:01.120 "data_offset": 0, 00:14:01.120 "data_size": 65536 00:14:01.120 }, 00:14:01.120 { 00:14:01.120 "name": "BaseBdev3", 
00:14:01.120 "uuid": "3fcd9f75-ef57-476e-94fa-22d233edc296", 00:14:01.120 "is_configured": true, 00:14:01.120 "data_offset": 0, 00:14:01.120 "data_size": 65536 00:14:01.120 }, 00:14:01.120 { 00:14:01.120 "name": "BaseBdev4", 00:14:01.120 "uuid": "1efda9de-9071-41f4-912b-105995c27143", 00:14:01.120 "is_configured": true, 00:14:01.120 "data_offset": 0, 00:14:01.120 "data_size": 65536 00:14:01.120 } 00:14:01.120 ] 00:14:01.120 }' 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.120 18:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.689 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:01.689 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:01.689 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:01.689 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:01.689 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:01.689 18:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:01.689 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:01.689 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.689 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.689 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:01.689 [2024-12-06 18:12:27.006059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.689 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.689 
18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:01.689 "name": "Existed_Raid", 00:14:01.689 "aliases": [ 00:14:01.689 "fa0985a7-94bd-4384-94a2-21751368a4a0" 00:14:01.689 ], 00:14:01.689 "product_name": "Raid Volume", 00:14:01.689 "block_size": 512, 00:14:01.689 "num_blocks": 262144, 00:14:01.689 "uuid": "fa0985a7-94bd-4384-94a2-21751368a4a0", 00:14:01.689 "assigned_rate_limits": { 00:14:01.689 "rw_ios_per_sec": 0, 00:14:01.689 "rw_mbytes_per_sec": 0, 00:14:01.689 "r_mbytes_per_sec": 0, 00:14:01.689 "w_mbytes_per_sec": 0 00:14:01.689 }, 00:14:01.689 "claimed": false, 00:14:01.689 "zoned": false, 00:14:01.689 "supported_io_types": { 00:14:01.689 "read": true, 00:14:01.689 "write": true, 00:14:01.689 "unmap": true, 00:14:01.689 "flush": true, 00:14:01.689 "reset": true, 00:14:01.689 "nvme_admin": false, 00:14:01.689 "nvme_io": false, 00:14:01.689 "nvme_io_md": false, 00:14:01.689 "write_zeroes": true, 00:14:01.689 "zcopy": false, 00:14:01.689 "get_zone_info": false, 00:14:01.689 "zone_management": false, 00:14:01.689 "zone_append": false, 00:14:01.689 "compare": false, 00:14:01.689 "compare_and_write": false, 00:14:01.689 "abort": false, 00:14:01.689 "seek_hole": false, 00:14:01.689 "seek_data": false, 00:14:01.689 "copy": false, 00:14:01.689 "nvme_iov_md": false 00:14:01.689 }, 00:14:01.689 "memory_domains": [ 00:14:01.689 { 00:14:01.689 "dma_device_id": "system", 00:14:01.689 "dma_device_type": 1 00:14:01.689 }, 00:14:01.689 { 00:14:01.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.689 "dma_device_type": 2 00:14:01.689 }, 00:14:01.689 { 00:14:01.689 "dma_device_id": "system", 00:14:01.689 "dma_device_type": 1 00:14:01.689 }, 00:14:01.689 { 00:14:01.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.689 "dma_device_type": 2 00:14:01.689 }, 00:14:01.689 { 00:14:01.689 "dma_device_id": "system", 00:14:01.689 "dma_device_type": 1 00:14:01.689 }, 00:14:01.689 { 00:14:01.689 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:01.689 "dma_device_type": 2 00:14:01.689 }, 00:14:01.689 { 00:14:01.689 "dma_device_id": "system", 00:14:01.689 "dma_device_type": 1 00:14:01.689 }, 00:14:01.689 { 00:14:01.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.689 "dma_device_type": 2 00:14:01.689 } 00:14:01.689 ], 00:14:01.689 "driver_specific": { 00:14:01.689 "raid": { 00:14:01.689 "uuid": "fa0985a7-94bd-4384-94a2-21751368a4a0", 00:14:01.689 "strip_size_kb": 64, 00:14:01.689 "state": "online", 00:14:01.689 "raid_level": "concat", 00:14:01.689 "superblock": false, 00:14:01.689 "num_base_bdevs": 4, 00:14:01.689 "num_base_bdevs_discovered": 4, 00:14:01.689 "num_base_bdevs_operational": 4, 00:14:01.689 "base_bdevs_list": [ 00:14:01.689 { 00:14:01.689 "name": "BaseBdev1", 00:14:01.689 "uuid": "7356e998-8c76-4b6a-a387-cf66ea6383ea", 00:14:01.689 "is_configured": true, 00:14:01.689 "data_offset": 0, 00:14:01.689 "data_size": 65536 00:14:01.689 }, 00:14:01.689 { 00:14:01.689 "name": "BaseBdev2", 00:14:01.689 "uuid": "82b80d23-926d-4e2e-b120-8e362d8f12f6", 00:14:01.689 "is_configured": true, 00:14:01.689 "data_offset": 0, 00:14:01.689 "data_size": 65536 00:14:01.689 }, 00:14:01.689 { 00:14:01.689 "name": "BaseBdev3", 00:14:01.689 "uuid": "3fcd9f75-ef57-476e-94fa-22d233edc296", 00:14:01.689 "is_configured": true, 00:14:01.689 "data_offset": 0, 00:14:01.689 "data_size": 65536 00:14:01.689 }, 00:14:01.689 { 00:14:01.689 "name": "BaseBdev4", 00:14:01.689 "uuid": "1efda9de-9071-41f4-912b-105995c27143", 00:14:01.689 "is_configured": true, 00:14:01.689 "data_offset": 0, 00:14:01.689 "data_size": 65536 00:14:01.689 } 00:14:01.689 ] 00:14:01.689 } 00:14:01.689 } 00:14:01.689 }' 00:14:01.689 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:01.689 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:01.689 BaseBdev2 
00:14:01.689 BaseBdev3 00:14:01.689 BaseBdev4' 00:14:01.689 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.689 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:01.690 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.690 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:01.690 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.690 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.690 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.690 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.690 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.690 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.690 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.690 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:01.690 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.690 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.949 18:12:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.949 18:12:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.949 [2024-12-06 18:12:27.365713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:01.949 [2024-12-06 18:12:27.365765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.949 [2024-12-06 18:12:27.365869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.949 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.208 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.208 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.208 "name": "Existed_Raid", 00:14:02.208 "uuid": "fa0985a7-94bd-4384-94a2-21751368a4a0", 00:14:02.208 "strip_size_kb": 64, 00:14:02.208 "state": "offline", 00:14:02.208 "raid_level": "concat", 00:14:02.208 "superblock": false, 00:14:02.208 "num_base_bdevs": 4, 00:14:02.208 "num_base_bdevs_discovered": 3, 00:14:02.208 "num_base_bdevs_operational": 3, 00:14:02.208 "base_bdevs_list": [ 00:14:02.208 { 00:14:02.208 "name": null, 00:14:02.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.208 "is_configured": false, 00:14:02.208 "data_offset": 0, 00:14:02.208 "data_size": 65536 00:14:02.208 }, 00:14:02.208 { 00:14:02.208 "name": "BaseBdev2", 00:14:02.208 "uuid": "82b80d23-926d-4e2e-b120-8e362d8f12f6", 00:14:02.208 "is_configured": 
true, 00:14:02.208 "data_offset": 0, 00:14:02.208 "data_size": 65536 00:14:02.208 }, 00:14:02.208 { 00:14:02.208 "name": "BaseBdev3", 00:14:02.208 "uuid": "3fcd9f75-ef57-476e-94fa-22d233edc296", 00:14:02.208 "is_configured": true, 00:14:02.208 "data_offset": 0, 00:14:02.208 "data_size": 65536 00:14:02.208 }, 00:14:02.208 { 00:14:02.208 "name": "BaseBdev4", 00:14:02.208 "uuid": "1efda9de-9071-41f4-912b-105995c27143", 00:14:02.208 "is_configured": true, 00:14:02.208 "data_offset": 0, 00:14:02.208 "data_size": 65536 00:14:02.208 } 00:14:02.208 ] 00:14:02.208 }' 00:14:02.208 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.208 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.466 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:02.466 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:02.466 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.466 18:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:02.466 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.466 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.466 18:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.725 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:02.725 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:02.725 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:02.725 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:02.725 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.725 [2024-12-06 18:12:28.011545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:02.725 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.725 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:02.725 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:02.725 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.725 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.725 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.725 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:02.725 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.726 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:02.726 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:02.726 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:02.726 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.726 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.726 [2024-12-06 18:12:28.160033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:02.985 18:12:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.985 [2024-12-06 18:12:28.313290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:02.985 [2024-12-06 18:12:28.313375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.985 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.245 BaseBdev2 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.245 [ 00:14:03.245 { 00:14:03.245 "name": "BaseBdev2", 00:14:03.245 "aliases": [ 00:14:03.245 "5b2c1b96-f76c-4cfd-851d-2b9df774a23b" 00:14:03.245 ], 00:14:03.245 "product_name": "Malloc disk", 00:14:03.245 "block_size": 512, 00:14:03.245 "num_blocks": 65536, 00:14:03.245 "uuid": "5b2c1b96-f76c-4cfd-851d-2b9df774a23b", 00:14:03.245 "assigned_rate_limits": { 00:14:03.245 "rw_ios_per_sec": 0, 00:14:03.245 "rw_mbytes_per_sec": 0, 00:14:03.245 "r_mbytes_per_sec": 0, 00:14:03.245 "w_mbytes_per_sec": 0 00:14:03.245 }, 00:14:03.245 "claimed": false, 00:14:03.245 "zoned": false, 00:14:03.245 "supported_io_types": { 00:14:03.245 "read": true, 00:14:03.245 "write": true, 00:14:03.245 "unmap": true, 00:14:03.245 "flush": true, 00:14:03.245 "reset": true, 00:14:03.245 "nvme_admin": false, 00:14:03.245 "nvme_io": false, 00:14:03.245 "nvme_io_md": false, 00:14:03.245 "write_zeroes": true, 00:14:03.245 "zcopy": true, 00:14:03.245 "get_zone_info": false, 00:14:03.245 "zone_management": false, 00:14:03.245 "zone_append": false, 00:14:03.245 "compare": false, 00:14:03.245 "compare_and_write": false, 00:14:03.245 "abort": true, 00:14:03.245 "seek_hole": false, 00:14:03.245 
"seek_data": false, 00:14:03.245 "copy": true, 00:14:03.245 "nvme_iov_md": false 00:14:03.245 }, 00:14:03.245 "memory_domains": [ 00:14:03.245 { 00:14:03.245 "dma_device_id": "system", 00:14:03.245 "dma_device_type": 1 00:14:03.245 }, 00:14:03.245 { 00:14:03.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.245 "dma_device_type": 2 00:14:03.245 } 00:14:03.245 ], 00:14:03.245 "driver_specific": {} 00:14:03.245 } 00:14:03.245 ] 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.245 BaseBdev3 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:03.245 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.246 [ 00:14:03.246 { 00:14:03.246 "name": "BaseBdev3", 00:14:03.246 "aliases": [ 00:14:03.246 "4b129fb1-0be8-421b-afe9-56e122037f01" 00:14:03.246 ], 00:14:03.246 "product_name": "Malloc disk", 00:14:03.246 "block_size": 512, 00:14:03.246 "num_blocks": 65536, 00:14:03.246 "uuid": "4b129fb1-0be8-421b-afe9-56e122037f01", 00:14:03.246 "assigned_rate_limits": { 00:14:03.246 "rw_ios_per_sec": 0, 00:14:03.246 "rw_mbytes_per_sec": 0, 00:14:03.246 "r_mbytes_per_sec": 0, 00:14:03.246 "w_mbytes_per_sec": 0 00:14:03.246 }, 00:14:03.246 "claimed": false, 00:14:03.246 "zoned": false, 00:14:03.246 "supported_io_types": { 00:14:03.246 "read": true, 00:14:03.246 "write": true, 00:14:03.246 "unmap": true, 00:14:03.246 "flush": true, 00:14:03.246 "reset": true, 00:14:03.246 "nvme_admin": false, 00:14:03.246 "nvme_io": false, 00:14:03.246 "nvme_io_md": false, 00:14:03.246 "write_zeroes": true, 00:14:03.246 "zcopy": true, 00:14:03.246 "get_zone_info": false, 00:14:03.246 "zone_management": false, 00:14:03.246 "zone_append": false, 00:14:03.246 "compare": false, 00:14:03.246 "compare_and_write": false, 00:14:03.246 "abort": true, 00:14:03.246 "seek_hole": false, 00:14:03.246 "seek_data": false, 
00:14:03.246 "copy": true, 00:14:03.246 "nvme_iov_md": false 00:14:03.246 }, 00:14:03.246 "memory_domains": [ 00:14:03.246 { 00:14:03.246 "dma_device_id": "system", 00:14:03.246 "dma_device_type": 1 00:14:03.246 }, 00:14:03.246 { 00:14:03.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.246 "dma_device_type": 2 00:14:03.246 } 00:14:03.246 ], 00:14:03.246 "driver_specific": {} 00:14:03.246 } 00:14:03.246 ] 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.246 BaseBdev4 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:03.246 
18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.246 [ 00:14:03.246 { 00:14:03.246 "name": "BaseBdev4", 00:14:03.246 "aliases": [ 00:14:03.246 "ca6409e7-5237-4380-9859-f2dfedb14297" 00:14:03.246 ], 00:14:03.246 "product_name": "Malloc disk", 00:14:03.246 "block_size": 512, 00:14:03.246 "num_blocks": 65536, 00:14:03.246 "uuid": "ca6409e7-5237-4380-9859-f2dfedb14297", 00:14:03.246 "assigned_rate_limits": { 00:14:03.246 "rw_ios_per_sec": 0, 00:14:03.246 "rw_mbytes_per_sec": 0, 00:14:03.246 "r_mbytes_per_sec": 0, 00:14:03.246 "w_mbytes_per_sec": 0 00:14:03.246 }, 00:14:03.246 "claimed": false, 00:14:03.246 "zoned": false, 00:14:03.246 "supported_io_types": { 00:14:03.246 "read": true, 00:14:03.246 "write": true, 00:14:03.246 "unmap": true, 00:14:03.246 "flush": true, 00:14:03.246 "reset": true, 00:14:03.246 "nvme_admin": false, 00:14:03.246 "nvme_io": false, 00:14:03.246 "nvme_io_md": false, 00:14:03.246 "write_zeroes": true, 00:14:03.246 "zcopy": true, 00:14:03.246 "get_zone_info": false, 00:14:03.246 "zone_management": false, 00:14:03.246 "zone_append": false, 00:14:03.246 "compare": false, 00:14:03.246 "compare_and_write": false, 00:14:03.246 "abort": true, 00:14:03.246 "seek_hole": false, 00:14:03.246 "seek_data": false, 00:14:03.246 
"copy": true, 00:14:03.246 "nvme_iov_md": false 00:14:03.246 }, 00:14:03.246 "memory_domains": [ 00:14:03.246 { 00:14:03.246 "dma_device_id": "system", 00:14:03.246 "dma_device_type": 1 00:14:03.246 }, 00:14:03.246 { 00:14:03.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.246 "dma_device_type": 2 00:14:03.246 } 00:14:03.246 ], 00:14:03.246 "driver_specific": {} 00:14:03.246 } 00:14:03.246 ] 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.246 [2024-12-06 18:12:28.687775] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:03.246 [2024-12-06 18:12:28.688019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:03.246 [2024-12-06 18:12:28.688065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:03.246 [2024-12-06 18:12:28.690570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:03.246 [2024-12-06 18:12:28.690634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.246 18:12:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.246 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.246 "name": "Existed_Raid", 00:14:03.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.246 "strip_size_kb": 64, 00:14:03.246 "state": "configuring", 00:14:03.246 
"raid_level": "concat", 00:14:03.246 "superblock": false, 00:14:03.246 "num_base_bdevs": 4, 00:14:03.246 "num_base_bdevs_discovered": 3, 00:14:03.246 "num_base_bdevs_operational": 4, 00:14:03.246 "base_bdevs_list": [ 00:14:03.246 { 00:14:03.246 "name": "BaseBdev1", 00:14:03.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.246 "is_configured": false, 00:14:03.246 "data_offset": 0, 00:14:03.246 "data_size": 0 00:14:03.246 }, 00:14:03.246 { 00:14:03.246 "name": "BaseBdev2", 00:14:03.246 "uuid": "5b2c1b96-f76c-4cfd-851d-2b9df774a23b", 00:14:03.246 "is_configured": true, 00:14:03.246 "data_offset": 0, 00:14:03.246 "data_size": 65536 00:14:03.246 }, 00:14:03.246 { 00:14:03.247 "name": "BaseBdev3", 00:14:03.247 "uuid": "4b129fb1-0be8-421b-afe9-56e122037f01", 00:14:03.247 "is_configured": true, 00:14:03.247 "data_offset": 0, 00:14:03.247 "data_size": 65536 00:14:03.247 }, 00:14:03.247 { 00:14:03.247 "name": "BaseBdev4", 00:14:03.247 "uuid": "ca6409e7-5237-4380-9859-f2dfedb14297", 00:14:03.247 "is_configured": true, 00:14:03.247 "data_offset": 0, 00:14:03.247 "data_size": 65536 00:14:03.247 } 00:14:03.247 ] 00:14:03.247 }' 00:14:03.247 18:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.247 18:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.828 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:03.828 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.828 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.828 [2024-12-06 18:12:29.232017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.829 "name": "Existed_Raid", 00:14:03.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.829 "strip_size_kb": 64, 00:14:03.829 "state": "configuring", 00:14:03.829 "raid_level": "concat", 00:14:03.829 "superblock": false, 
00:14:03.829 "num_base_bdevs": 4, 00:14:03.829 "num_base_bdevs_discovered": 2, 00:14:03.829 "num_base_bdevs_operational": 4, 00:14:03.829 "base_bdevs_list": [ 00:14:03.829 { 00:14:03.829 "name": "BaseBdev1", 00:14:03.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.829 "is_configured": false, 00:14:03.829 "data_offset": 0, 00:14:03.829 "data_size": 0 00:14:03.829 }, 00:14:03.829 { 00:14:03.829 "name": null, 00:14:03.829 "uuid": "5b2c1b96-f76c-4cfd-851d-2b9df774a23b", 00:14:03.829 "is_configured": false, 00:14:03.829 "data_offset": 0, 00:14:03.829 "data_size": 65536 00:14:03.829 }, 00:14:03.829 { 00:14:03.829 "name": "BaseBdev3", 00:14:03.829 "uuid": "4b129fb1-0be8-421b-afe9-56e122037f01", 00:14:03.829 "is_configured": true, 00:14:03.829 "data_offset": 0, 00:14:03.829 "data_size": 65536 00:14:03.829 }, 00:14:03.829 { 00:14:03.829 "name": "BaseBdev4", 00:14:03.829 "uuid": "ca6409e7-5237-4380-9859-f2dfedb14297", 00:14:03.829 "is_configured": true, 00:14:03.829 "data_offset": 0, 00:14:03.829 "data_size": 65536 00:14:03.829 } 00:14:03.829 ] 00:14:03.829 }' 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.829 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:04.397 18:12:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.397 [2024-12-06 18:12:29.890364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.397 BaseBdev1 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.397 18:12:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.655 [ 00:14:04.655 { 00:14:04.655 "name": "BaseBdev1", 00:14:04.655 "aliases": [ 00:14:04.655 "43e452cc-e9fb-4a1a-8555-62a3dbfc260d" 00:14:04.655 ], 00:14:04.655 "product_name": "Malloc disk", 00:14:04.655 "block_size": 512, 00:14:04.655 "num_blocks": 65536, 00:14:04.655 "uuid": "43e452cc-e9fb-4a1a-8555-62a3dbfc260d", 00:14:04.655 "assigned_rate_limits": { 00:14:04.655 "rw_ios_per_sec": 0, 00:14:04.655 "rw_mbytes_per_sec": 0, 00:14:04.655 "r_mbytes_per_sec": 0, 00:14:04.655 "w_mbytes_per_sec": 0 00:14:04.655 }, 00:14:04.655 "claimed": true, 00:14:04.655 "claim_type": "exclusive_write", 00:14:04.655 "zoned": false, 00:14:04.655 "supported_io_types": { 00:14:04.655 "read": true, 00:14:04.655 "write": true, 00:14:04.655 "unmap": true, 00:14:04.655 "flush": true, 00:14:04.656 "reset": true, 00:14:04.656 "nvme_admin": false, 00:14:04.656 "nvme_io": false, 00:14:04.656 "nvme_io_md": false, 00:14:04.656 "write_zeroes": true, 00:14:04.656 "zcopy": true, 00:14:04.656 "get_zone_info": false, 00:14:04.656 "zone_management": false, 00:14:04.656 "zone_append": false, 00:14:04.656 "compare": false, 00:14:04.656 "compare_and_write": false, 00:14:04.656 "abort": true, 00:14:04.656 "seek_hole": false, 00:14:04.656 "seek_data": false, 00:14:04.656 "copy": true, 00:14:04.656 "nvme_iov_md": false 00:14:04.656 }, 00:14:04.656 "memory_domains": [ 00:14:04.656 { 00:14:04.656 "dma_device_id": "system", 00:14:04.656 "dma_device_type": 1 00:14:04.656 }, 00:14:04.656 { 00:14:04.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.656 "dma_device_type": 2 00:14:04.656 } 00:14:04.656 ], 00:14:04.656 "driver_specific": {} 00:14:04.656 } 00:14:04.656 ] 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.656 "name": "Existed_Raid", 00:14:04.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.656 "strip_size_kb": 64, 00:14:04.656 "state": "configuring", 00:14:04.656 "raid_level": "concat", 00:14:04.656 "superblock": false, 
00:14:04.656 "num_base_bdevs": 4, 00:14:04.656 "num_base_bdevs_discovered": 3, 00:14:04.656 "num_base_bdevs_operational": 4, 00:14:04.656 "base_bdevs_list": [ 00:14:04.656 { 00:14:04.656 "name": "BaseBdev1", 00:14:04.656 "uuid": "43e452cc-e9fb-4a1a-8555-62a3dbfc260d", 00:14:04.656 "is_configured": true, 00:14:04.656 "data_offset": 0, 00:14:04.656 "data_size": 65536 00:14:04.656 }, 00:14:04.656 { 00:14:04.656 "name": null, 00:14:04.656 "uuid": "5b2c1b96-f76c-4cfd-851d-2b9df774a23b", 00:14:04.656 "is_configured": false, 00:14:04.656 "data_offset": 0, 00:14:04.656 "data_size": 65536 00:14:04.656 }, 00:14:04.656 { 00:14:04.656 "name": "BaseBdev3", 00:14:04.656 "uuid": "4b129fb1-0be8-421b-afe9-56e122037f01", 00:14:04.656 "is_configured": true, 00:14:04.656 "data_offset": 0, 00:14:04.656 "data_size": 65536 00:14:04.656 }, 00:14:04.656 { 00:14:04.656 "name": "BaseBdev4", 00:14:04.656 "uuid": "ca6409e7-5237-4380-9859-f2dfedb14297", 00:14:04.656 "is_configured": true, 00:14:04.656 "data_offset": 0, 00:14:04.656 "data_size": 65536 00:14:04.656 } 00:14:04.656 ] 00:14:04.656 }' 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.656 18:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:05.226 18:12:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.226 [2024-12-06 18:12:30.530690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.226 18:12:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.226 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.226 "name": "Existed_Raid", 00:14:05.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.226 "strip_size_kb": 64, 00:14:05.226 "state": "configuring", 00:14:05.227 "raid_level": "concat", 00:14:05.227 "superblock": false, 00:14:05.227 "num_base_bdevs": 4, 00:14:05.227 "num_base_bdevs_discovered": 2, 00:14:05.227 "num_base_bdevs_operational": 4, 00:14:05.227 "base_bdevs_list": [ 00:14:05.227 { 00:14:05.227 "name": "BaseBdev1", 00:14:05.227 "uuid": "43e452cc-e9fb-4a1a-8555-62a3dbfc260d", 00:14:05.227 "is_configured": true, 00:14:05.227 "data_offset": 0, 00:14:05.227 "data_size": 65536 00:14:05.227 }, 00:14:05.227 { 00:14:05.227 "name": null, 00:14:05.227 "uuid": "5b2c1b96-f76c-4cfd-851d-2b9df774a23b", 00:14:05.227 "is_configured": false, 00:14:05.227 "data_offset": 0, 00:14:05.227 "data_size": 65536 00:14:05.227 }, 00:14:05.227 { 00:14:05.227 "name": null, 00:14:05.227 "uuid": "4b129fb1-0be8-421b-afe9-56e122037f01", 00:14:05.227 "is_configured": false, 00:14:05.227 "data_offset": 0, 00:14:05.227 "data_size": 65536 00:14:05.227 }, 00:14:05.227 { 00:14:05.227 "name": "BaseBdev4", 00:14:05.227 "uuid": "ca6409e7-5237-4380-9859-f2dfedb14297", 00:14:05.227 "is_configured": true, 00:14:05.227 "data_offset": 0, 00:14:05.227 "data_size": 65536 00:14:05.227 } 00:14:05.227 ] 00:14:05.227 }' 00:14:05.227 18:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.227 18:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.794 18:12:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.794 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:05.794 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.794 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.794 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.795 [2024-12-06 18:12:31.106891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.795 "name": "Existed_Raid", 00:14:05.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.795 "strip_size_kb": 64, 00:14:05.795 "state": "configuring", 00:14:05.795 "raid_level": "concat", 00:14:05.795 "superblock": false, 00:14:05.795 "num_base_bdevs": 4, 00:14:05.795 "num_base_bdevs_discovered": 3, 00:14:05.795 "num_base_bdevs_operational": 4, 00:14:05.795 "base_bdevs_list": [ 00:14:05.795 { 00:14:05.795 "name": "BaseBdev1", 00:14:05.795 "uuid": "43e452cc-e9fb-4a1a-8555-62a3dbfc260d", 00:14:05.795 "is_configured": true, 00:14:05.795 "data_offset": 0, 00:14:05.795 "data_size": 65536 00:14:05.795 }, 00:14:05.795 { 00:14:05.795 "name": null, 00:14:05.795 "uuid": "5b2c1b96-f76c-4cfd-851d-2b9df774a23b", 00:14:05.795 "is_configured": false, 00:14:05.795 "data_offset": 0, 00:14:05.795 "data_size": 65536 00:14:05.795 }, 00:14:05.795 { 00:14:05.795 "name": "BaseBdev3", 00:14:05.795 "uuid": 
"4b129fb1-0be8-421b-afe9-56e122037f01", 00:14:05.795 "is_configured": true, 00:14:05.795 "data_offset": 0, 00:14:05.795 "data_size": 65536 00:14:05.795 }, 00:14:05.795 { 00:14:05.795 "name": "BaseBdev4", 00:14:05.795 "uuid": "ca6409e7-5237-4380-9859-f2dfedb14297", 00:14:05.795 "is_configured": true, 00:14:05.795 "data_offset": 0, 00:14:05.795 "data_size": 65536 00:14:05.795 } 00:14:05.795 ] 00:14:05.795 }' 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.795 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.362 [2024-12-06 18:12:31.687114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.362 "name": "Existed_Raid", 00:14:06.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.362 "strip_size_kb": 64, 00:14:06.362 "state": "configuring", 00:14:06.362 "raid_level": "concat", 00:14:06.362 "superblock": false, 00:14:06.362 "num_base_bdevs": 4, 00:14:06.362 
"num_base_bdevs_discovered": 2, 00:14:06.362 "num_base_bdevs_operational": 4, 00:14:06.362 "base_bdevs_list": [ 00:14:06.362 { 00:14:06.362 "name": null, 00:14:06.362 "uuid": "43e452cc-e9fb-4a1a-8555-62a3dbfc260d", 00:14:06.362 "is_configured": false, 00:14:06.362 "data_offset": 0, 00:14:06.362 "data_size": 65536 00:14:06.362 }, 00:14:06.362 { 00:14:06.362 "name": null, 00:14:06.362 "uuid": "5b2c1b96-f76c-4cfd-851d-2b9df774a23b", 00:14:06.362 "is_configured": false, 00:14:06.362 "data_offset": 0, 00:14:06.362 "data_size": 65536 00:14:06.362 }, 00:14:06.362 { 00:14:06.362 "name": "BaseBdev3", 00:14:06.362 "uuid": "4b129fb1-0be8-421b-afe9-56e122037f01", 00:14:06.362 "is_configured": true, 00:14:06.362 "data_offset": 0, 00:14:06.362 "data_size": 65536 00:14:06.362 }, 00:14:06.362 { 00:14:06.362 "name": "BaseBdev4", 00:14:06.362 "uuid": "ca6409e7-5237-4380-9859-f2dfedb14297", 00:14:06.362 "is_configured": true, 00:14:06.362 "data_offset": 0, 00:14:06.362 "data_size": 65536 00:14:06.362 } 00:14:06.362 ] 00:14:06.362 }' 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.362 18:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.929 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:06.929 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.929 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.929 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.929 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.929 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:06.929 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:06.929 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.929 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.929 [2024-12-06 18:12:32.331155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:06.929 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.929 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:06.929 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.930 "name": "Existed_Raid", 00:14:06.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.930 "strip_size_kb": 64, 00:14:06.930 "state": "configuring", 00:14:06.930 "raid_level": "concat", 00:14:06.930 "superblock": false, 00:14:06.930 "num_base_bdevs": 4, 00:14:06.930 "num_base_bdevs_discovered": 3, 00:14:06.930 "num_base_bdevs_operational": 4, 00:14:06.930 "base_bdevs_list": [ 00:14:06.930 { 00:14:06.930 "name": null, 00:14:06.930 "uuid": "43e452cc-e9fb-4a1a-8555-62a3dbfc260d", 00:14:06.930 "is_configured": false, 00:14:06.930 "data_offset": 0, 00:14:06.930 "data_size": 65536 00:14:06.930 }, 00:14:06.930 { 00:14:06.930 "name": "BaseBdev2", 00:14:06.930 "uuid": "5b2c1b96-f76c-4cfd-851d-2b9df774a23b", 00:14:06.930 "is_configured": true, 00:14:06.930 "data_offset": 0, 00:14:06.930 "data_size": 65536 00:14:06.930 }, 00:14:06.930 { 00:14:06.930 "name": "BaseBdev3", 00:14:06.930 "uuid": "4b129fb1-0be8-421b-afe9-56e122037f01", 00:14:06.930 "is_configured": true, 00:14:06.930 "data_offset": 0, 00:14:06.930 "data_size": 65536 00:14:06.930 }, 00:14:06.930 { 00:14:06.930 "name": "BaseBdev4", 00:14:06.930 "uuid": "ca6409e7-5237-4380-9859-f2dfedb14297", 00:14:06.930 "is_configured": true, 00:14:06.930 "data_offset": 0, 00:14:06.930 "data_size": 65536 00:14:06.930 } 00:14:06.930 ] 00:14:06.930 }' 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.930 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 43e452cc-e9fb-4a1a-8555-62a3dbfc260d 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.497 [2024-12-06 18:12:32.941928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:07.497 [2024-12-06 18:12:32.941988] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:07.497 [2024-12-06 18:12:32.942001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:07.497 [2024-12-06 18:12:32.942331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:14:07.497 [2024-12-06 18:12:32.942513] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:07.497 [2024-12-06 18:12:32.942533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:07.497 [2024-12-06 18:12:32.942854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.497 NewBaseBdev 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.497 [ 00:14:07.497 { 00:14:07.497 "name": "NewBaseBdev", 00:14:07.497 "aliases": [ 00:14:07.497 "43e452cc-e9fb-4a1a-8555-62a3dbfc260d" 00:14:07.497 ], 00:14:07.497 "product_name": "Malloc disk", 00:14:07.497 "block_size": 512, 00:14:07.497 "num_blocks": 65536, 00:14:07.497 "uuid": "43e452cc-e9fb-4a1a-8555-62a3dbfc260d", 00:14:07.497 "assigned_rate_limits": { 00:14:07.497 "rw_ios_per_sec": 0, 00:14:07.497 "rw_mbytes_per_sec": 0, 00:14:07.497 "r_mbytes_per_sec": 0, 00:14:07.497 "w_mbytes_per_sec": 0 00:14:07.497 }, 00:14:07.497 "claimed": true, 00:14:07.497 "claim_type": "exclusive_write", 00:14:07.497 "zoned": false, 00:14:07.497 "supported_io_types": { 00:14:07.497 "read": true, 00:14:07.497 "write": true, 00:14:07.497 "unmap": true, 00:14:07.497 "flush": true, 00:14:07.497 "reset": true, 00:14:07.497 "nvme_admin": false, 00:14:07.497 "nvme_io": false, 00:14:07.497 "nvme_io_md": false, 00:14:07.497 "write_zeroes": true, 00:14:07.497 "zcopy": true, 00:14:07.497 "get_zone_info": false, 00:14:07.497 "zone_management": false, 00:14:07.497 "zone_append": false, 00:14:07.497 "compare": false, 00:14:07.497 "compare_and_write": false, 00:14:07.497 "abort": true, 00:14:07.497 "seek_hole": false, 00:14:07.497 "seek_data": false, 00:14:07.497 "copy": true, 00:14:07.497 "nvme_iov_md": false 00:14:07.497 }, 00:14:07.497 "memory_domains": [ 00:14:07.497 { 00:14:07.497 "dma_device_id": "system", 00:14:07.497 "dma_device_type": 1 00:14:07.497 }, 00:14:07.497 { 00:14:07.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.497 "dma_device_type": 2 00:14:07.497 } 00:14:07.497 ], 00:14:07.497 "driver_specific": {} 00:14:07.497 } 00:14:07.497 ] 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.497 18:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.755 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.755 "name": "Existed_Raid", 00:14:07.755 "uuid": "676a57f8-c387-4aee-86e9-1b11298a1e99", 00:14:07.755 "strip_size_kb": 64, 00:14:07.755 "state": "online", 00:14:07.755 "raid_level": "concat", 00:14:07.755 "superblock": false, 00:14:07.755 
"num_base_bdevs": 4, 00:14:07.755 "num_base_bdevs_discovered": 4, 00:14:07.755 "num_base_bdevs_operational": 4, 00:14:07.755 "base_bdevs_list": [ 00:14:07.755 { 00:14:07.755 "name": "NewBaseBdev", 00:14:07.755 "uuid": "43e452cc-e9fb-4a1a-8555-62a3dbfc260d", 00:14:07.755 "is_configured": true, 00:14:07.755 "data_offset": 0, 00:14:07.755 "data_size": 65536 00:14:07.755 }, 00:14:07.755 { 00:14:07.755 "name": "BaseBdev2", 00:14:07.755 "uuid": "5b2c1b96-f76c-4cfd-851d-2b9df774a23b", 00:14:07.755 "is_configured": true, 00:14:07.755 "data_offset": 0, 00:14:07.755 "data_size": 65536 00:14:07.755 }, 00:14:07.755 { 00:14:07.755 "name": "BaseBdev3", 00:14:07.755 "uuid": "4b129fb1-0be8-421b-afe9-56e122037f01", 00:14:07.755 "is_configured": true, 00:14:07.755 "data_offset": 0, 00:14:07.755 "data_size": 65536 00:14:07.755 }, 00:14:07.755 { 00:14:07.755 "name": "BaseBdev4", 00:14:07.755 "uuid": "ca6409e7-5237-4380-9859-f2dfedb14297", 00:14:07.755 "is_configured": true, 00:14:07.755 "data_offset": 0, 00:14:07.755 "data_size": 65536 00:14:07.755 } 00:14:07.755 ] 00:14:07.755 }' 00:14:07.755 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.755 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.013 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:08.013 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:08.013 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:08.013 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:08.013 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:08.013 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:08.013 18:12:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:08.013 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:08.013 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.013 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.013 [2024-12-06 18:12:33.506596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.272 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.272 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:08.272 "name": "Existed_Raid", 00:14:08.272 "aliases": [ 00:14:08.272 "676a57f8-c387-4aee-86e9-1b11298a1e99" 00:14:08.272 ], 00:14:08.272 "product_name": "Raid Volume", 00:14:08.272 "block_size": 512, 00:14:08.272 "num_blocks": 262144, 00:14:08.272 "uuid": "676a57f8-c387-4aee-86e9-1b11298a1e99", 00:14:08.272 "assigned_rate_limits": { 00:14:08.272 "rw_ios_per_sec": 0, 00:14:08.272 "rw_mbytes_per_sec": 0, 00:14:08.272 "r_mbytes_per_sec": 0, 00:14:08.272 "w_mbytes_per_sec": 0 00:14:08.272 }, 00:14:08.272 "claimed": false, 00:14:08.272 "zoned": false, 00:14:08.272 "supported_io_types": { 00:14:08.272 "read": true, 00:14:08.272 "write": true, 00:14:08.272 "unmap": true, 00:14:08.272 "flush": true, 00:14:08.272 "reset": true, 00:14:08.272 "nvme_admin": false, 00:14:08.272 "nvme_io": false, 00:14:08.272 "nvme_io_md": false, 00:14:08.272 "write_zeroes": true, 00:14:08.272 "zcopy": false, 00:14:08.272 "get_zone_info": false, 00:14:08.272 "zone_management": false, 00:14:08.272 "zone_append": false, 00:14:08.272 "compare": false, 00:14:08.272 "compare_and_write": false, 00:14:08.272 "abort": false, 00:14:08.272 "seek_hole": false, 00:14:08.272 "seek_data": false, 00:14:08.272 "copy": false, 00:14:08.272 "nvme_iov_md": false 00:14:08.272 }, 
00:14:08.272 "memory_domains": [ 00:14:08.272 { 00:14:08.272 "dma_device_id": "system", 00:14:08.272 "dma_device_type": 1 00:14:08.272 }, 00:14:08.272 { 00:14:08.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.273 "dma_device_type": 2 00:14:08.273 }, 00:14:08.273 { 00:14:08.273 "dma_device_id": "system", 00:14:08.273 "dma_device_type": 1 00:14:08.273 }, 00:14:08.273 { 00:14:08.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.273 "dma_device_type": 2 00:14:08.273 }, 00:14:08.273 { 00:14:08.273 "dma_device_id": "system", 00:14:08.273 "dma_device_type": 1 00:14:08.273 }, 00:14:08.273 { 00:14:08.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.273 "dma_device_type": 2 00:14:08.273 }, 00:14:08.273 { 00:14:08.273 "dma_device_id": "system", 00:14:08.273 "dma_device_type": 1 00:14:08.273 }, 00:14:08.273 { 00:14:08.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.273 "dma_device_type": 2 00:14:08.273 } 00:14:08.273 ], 00:14:08.273 "driver_specific": { 00:14:08.273 "raid": { 00:14:08.273 "uuid": "676a57f8-c387-4aee-86e9-1b11298a1e99", 00:14:08.273 "strip_size_kb": 64, 00:14:08.273 "state": "online", 00:14:08.273 "raid_level": "concat", 00:14:08.273 "superblock": false, 00:14:08.273 "num_base_bdevs": 4, 00:14:08.273 "num_base_bdevs_discovered": 4, 00:14:08.273 "num_base_bdevs_operational": 4, 00:14:08.273 "base_bdevs_list": [ 00:14:08.273 { 00:14:08.273 "name": "NewBaseBdev", 00:14:08.273 "uuid": "43e452cc-e9fb-4a1a-8555-62a3dbfc260d", 00:14:08.273 "is_configured": true, 00:14:08.273 "data_offset": 0, 00:14:08.273 "data_size": 65536 00:14:08.273 }, 00:14:08.273 { 00:14:08.273 "name": "BaseBdev2", 00:14:08.273 "uuid": "5b2c1b96-f76c-4cfd-851d-2b9df774a23b", 00:14:08.273 "is_configured": true, 00:14:08.273 "data_offset": 0, 00:14:08.273 "data_size": 65536 00:14:08.273 }, 00:14:08.273 { 00:14:08.273 "name": "BaseBdev3", 00:14:08.273 "uuid": "4b129fb1-0be8-421b-afe9-56e122037f01", 00:14:08.273 "is_configured": true, 00:14:08.273 "data_offset": 0, 
00:14:08.273 "data_size": 65536 00:14:08.273 }, 00:14:08.273 { 00:14:08.273 "name": "BaseBdev4", 00:14:08.273 "uuid": "ca6409e7-5237-4380-9859-f2dfedb14297", 00:14:08.273 "is_configured": true, 00:14:08.273 "data_offset": 0, 00:14:08.273 "data_size": 65536 00:14:08.273 } 00:14:08.273 ] 00:14:08.273 } 00:14:08.273 } 00:14:08.273 }' 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:08.273 BaseBdev2 00:14:08.273 BaseBdev3 00:14:08.273 BaseBdev4' 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.273 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.531 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.532 [2024-12-06 18:12:33.874246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:08.532 [2024-12-06 18:12:33.874283] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.532 [2024-12-06 18:12:33.874372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.532 [2024-12-06 18:12:33.874455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.532 [2024-12-06 18:12:33.874470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71453 00:14:08.532 18:12:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71453 ']' 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71453 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71453 00:14:08.532 killing process with pid 71453 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71453' 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71453 00:14:08.532 [2024-12-06 18:12:33.914537] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.532 18:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71453 00:14:08.790 [2024-12-06 18:12:34.262977] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.177 ************************************ 00:14:10.177 END TEST raid_state_function_test 00:14:10.177 ************************************ 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:10.177 00:14:10.177 real 0m12.812s 00:14:10.177 user 0m21.282s 00:14:10.177 sys 0m1.792s 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.177 18:12:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:14:10.177 18:12:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:10.177 18:12:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.177 18:12:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.177 ************************************ 00:14:10.177 START TEST raid_state_function_test_sb 00:14:10.177 ************************************ 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:10.177 Process raid pid: 72130 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=72130 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72130' 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72130 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72130 ']' 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.177 18:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.177 [2024-12-06 18:12:35.436901] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:14:10.177 [2024-12-06 18:12:35.437061] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.177 [2024-12-06 18:12:35.612009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.435 [2024-12-06 18:12:35.744053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.694 [2024-12-06 18:12:35.953779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.694 [2024-12-06 18:12:35.953839] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.953 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.953 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:10.953 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:10.953 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.953 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.953 [2024-12-06 18:12:36.387852] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:10.953 [2024-12-06 18:12:36.387949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:10.953 [2024-12-06 18:12:36.387966] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.953 [2024-12-06 18:12:36.387983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.953 [2024-12-06 18:12:36.387993] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:10.953 [2024-12-06 18:12:36.388007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:10.953 [2024-12-06 18:12:36.388016] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:10.953 [2024-12-06 18:12:36.388030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:10.953 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.953 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.954 
18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.954 "name": "Existed_Raid", 00:14:10.954 "uuid": "a3a88639-03f3-484c-bf4f-7c7b51fe3a68", 00:14:10.954 "strip_size_kb": 64, 00:14:10.954 "state": "configuring", 00:14:10.954 "raid_level": "concat", 00:14:10.954 "superblock": true, 00:14:10.954 "num_base_bdevs": 4, 00:14:10.954 "num_base_bdevs_discovered": 0, 00:14:10.954 "num_base_bdevs_operational": 4, 00:14:10.954 "base_bdevs_list": [ 00:14:10.954 { 00:14:10.954 "name": "BaseBdev1", 00:14:10.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.954 "is_configured": false, 00:14:10.954 "data_offset": 0, 00:14:10.954 "data_size": 0 00:14:10.954 }, 00:14:10.954 { 00:14:10.954 "name": "BaseBdev2", 00:14:10.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.954 "is_configured": false, 00:14:10.954 "data_offset": 0, 00:14:10.954 "data_size": 0 00:14:10.954 }, 00:14:10.954 { 00:14:10.954 "name": "BaseBdev3", 00:14:10.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.954 "is_configured": false, 00:14:10.954 "data_offset": 0, 00:14:10.954 "data_size": 0 00:14:10.954 }, 00:14:10.954 { 00:14:10.954 "name": "BaseBdev4", 00:14:10.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.954 "is_configured": false, 00:14:10.954 "data_offset": 0, 00:14:10.954 "data_size": 0 00:14:10.954 } 00:14:10.954 ] 00:14:10.954 }' 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.954 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.527 18:12:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.527 [2024-12-06 18:12:36.891986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:11.527 [2024-12-06 18:12:36.892035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.527 [2024-12-06 18:12:36.899996] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:11.527 [2024-12-06 18:12:36.900168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:11.527 [2024-12-06 18:12:36.900283] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:11.527 [2024-12-06 18:12:36.900349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:11.527 [2024-12-06 18:12:36.900440] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:11.527 [2024-12-06 18:12:36.900546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:11.527 [2024-12-06 18:12:36.900641] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:14:11.527 [2024-12-06 18:12:36.900745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.527 [2024-12-06 18:12:36.950655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.527 BaseBdev1 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.527 [ 00:14:11.527 { 00:14:11.527 "name": "BaseBdev1", 00:14:11.527 "aliases": [ 00:14:11.527 "153cf32f-3dc5-4c16-8427-206ec715b00b" 00:14:11.527 ], 00:14:11.527 "product_name": "Malloc disk", 00:14:11.527 "block_size": 512, 00:14:11.527 "num_blocks": 65536, 00:14:11.527 "uuid": "153cf32f-3dc5-4c16-8427-206ec715b00b", 00:14:11.527 "assigned_rate_limits": { 00:14:11.527 "rw_ios_per_sec": 0, 00:14:11.527 "rw_mbytes_per_sec": 0, 00:14:11.527 "r_mbytes_per_sec": 0, 00:14:11.527 "w_mbytes_per_sec": 0 00:14:11.527 }, 00:14:11.527 "claimed": true, 00:14:11.527 "claim_type": "exclusive_write", 00:14:11.527 "zoned": false, 00:14:11.527 "supported_io_types": { 00:14:11.527 "read": true, 00:14:11.527 "write": true, 00:14:11.527 "unmap": true, 00:14:11.527 "flush": true, 00:14:11.527 "reset": true, 00:14:11.527 "nvme_admin": false, 00:14:11.527 "nvme_io": false, 00:14:11.527 "nvme_io_md": false, 00:14:11.527 "write_zeroes": true, 00:14:11.527 "zcopy": true, 00:14:11.527 "get_zone_info": false, 00:14:11.527 "zone_management": false, 00:14:11.527 "zone_append": false, 00:14:11.527 "compare": false, 00:14:11.527 "compare_and_write": false, 00:14:11.527 "abort": true, 00:14:11.527 "seek_hole": false, 00:14:11.527 "seek_data": false, 00:14:11.527 "copy": true, 00:14:11.527 "nvme_iov_md": false 00:14:11.527 }, 00:14:11.527 "memory_domains": [ 00:14:11.527 { 00:14:11.527 "dma_device_id": "system", 00:14:11.527 "dma_device_type": 1 00:14:11.527 }, 00:14:11.527 { 00:14:11.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.527 "dma_device_type": 2 00:14:11.527 } 
00:14:11.527 ], 00:14:11.527 "driver_specific": {} 00:14:11.527 } 00:14:11.527 ] 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.527 18:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.527 18:12:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.527 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.527 "name": "Existed_Raid", 00:14:11.527 "uuid": "30794a30-daa2-472c-919a-c72113604b88", 00:14:11.527 "strip_size_kb": 64, 00:14:11.527 "state": "configuring", 00:14:11.527 "raid_level": "concat", 00:14:11.527 "superblock": true, 00:14:11.528 "num_base_bdevs": 4, 00:14:11.528 "num_base_bdevs_discovered": 1, 00:14:11.528 "num_base_bdevs_operational": 4, 00:14:11.528 "base_bdevs_list": [ 00:14:11.528 { 00:14:11.528 "name": "BaseBdev1", 00:14:11.528 "uuid": "153cf32f-3dc5-4c16-8427-206ec715b00b", 00:14:11.528 "is_configured": true, 00:14:11.528 "data_offset": 2048, 00:14:11.528 "data_size": 63488 00:14:11.528 }, 00:14:11.528 { 00:14:11.528 "name": "BaseBdev2", 00:14:11.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.528 "is_configured": false, 00:14:11.528 "data_offset": 0, 00:14:11.528 "data_size": 0 00:14:11.528 }, 00:14:11.528 { 00:14:11.528 "name": "BaseBdev3", 00:14:11.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.528 "is_configured": false, 00:14:11.528 "data_offset": 0, 00:14:11.528 "data_size": 0 00:14:11.528 }, 00:14:11.528 { 00:14:11.528 "name": "BaseBdev4", 00:14:11.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.528 "is_configured": false, 00:14:11.528 "data_offset": 0, 00:14:11.528 "data_size": 0 00:14:11.528 } 00:14:11.528 ] 00:14:11.528 }' 00:14:11.528 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.528 18:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.095 18:12:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.095 [2024-12-06 18:12:37.502901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:12.095 [2024-12-06 18:12:37.502962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.095 [2024-12-06 18:12:37.510987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.095 [2024-12-06 18:12:37.513680] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:12.095 [2024-12-06 18:12:37.513757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:12.095 [2024-12-06 18:12:37.513774] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:12.095 [2024-12-06 18:12:37.513827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:12.095 [2024-12-06 18:12:37.513839] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:12.095 [2024-12-06 18:12:37.513854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.095 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:12.095 "name": "Existed_Raid", 00:14:12.095 "uuid": "c1fae6b3-44a6-4d7a-a8a3-d9be42573f19", 00:14:12.095 "strip_size_kb": 64, 00:14:12.095 "state": "configuring", 00:14:12.095 "raid_level": "concat", 00:14:12.095 "superblock": true, 00:14:12.095 "num_base_bdevs": 4, 00:14:12.095 "num_base_bdevs_discovered": 1, 00:14:12.095 "num_base_bdevs_operational": 4, 00:14:12.095 "base_bdevs_list": [ 00:14:12.095 { 00:14:12.096 "name": "BaseBdev1", 00:14:12.096 "uuid": "153cf32f-3dc5-4c16-8427-206ec715b00b", 00:14:12.096 "is_configured": true, 00:14:12.096 "data_offset": 2048, 00:14:12.096 "data_size": 63488 00:14:12.096 }, 00:14:12.096 { 00:14:12.096 "name": "BaseBdev2", 00:14:12.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.096 "is_configured": false, 00:14:12.096 "data_offset": 0, 00:14:12.096 "data_size": 0 00:14:12.096 }, 00:14:12.096 { 00:14:12.096 "name": "BaseBdev3", 00:14:12.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.096 "is_configured": false, 00:14:12.096 "data_offset": 0, 00:14:12.096 "data_size": 0 00:14:12.096 }, 00:14:12.096 { 00:14:12.096 "name": "BaseBdev4", 00:14:12.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.096 "is_configured": false, 00:14:12.096 "data_offset": 0, 00:14:12.096 "data_size": 0 00:14:12.096 } 00:14:12.096 ] 00:14:12.096 }' 00:14:12.096 18:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.096 18:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.662 [2024-12-06 18:12:38.078507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:14:12.662 BaseBdev2 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.662 [ 00:14:12.662 { 00:14:12.662 "name": "BaseBdev2", 00:14:12.662 "aliases": [ 00:14:12.662 "805a807f-6f5a-4e3d-b0aa-f73718f412bf" 00:14:12.662 ], 00:14:12.662 "product_name": "Malloc disk", 00:14:12.662 "block_size": 512, 00:14:12.662 "num_blocks": 65536, 00:14:12.662 "uuid": "805a807f-6f5a-4e3d-b0aa-f73718f412bf", 
00:14:12.662 "assigned_rate_limits": { 00:14:12.662 "rw_ios_per_sec": 0, 00:14:12.662 "rw_mbytes_per_sec": 0, 00:14:12.662 "r_mbytes_per_sec": 0, 00:14:12.662 "w_mbytes_per_sec": 0 00:14:12.662 }, 00:14:12.662 "claimed": true, 00:14:12.662 "claim_type": "exclusive_write", 00:14:12.662 "zoned": false, 00:14:12.662 "supported_io_types": { 00:14:12.662 "read": true, 00:14:12.662 "write": true, 00:14:12.662 "unmap": true, 00:14:12.662 "flush": true, 00:14:12.662 "reset": true, 00:14:12.662 "nvme_admin": false, 00:14:12.662 "nvme_io": false, 00:14:12.662 "nvme_io_md": false, 00:14:12.662 "write_zeroes": true, 00:14:12.662 "zcopy": true, 00:14:12.662 "get_zone_info": false, 00:14:12.662 "zone_management": false, 00:14:12.662 "zone_append": false, 00:14:12.662 "compare": false, 00:14:12.662 "compare_and_write": false, 00:14:12.662 "abort": true, 00:14:12.662 "seek_hole": false, 00:14:12.662 "seek_data": false, 00:14:12.662 "copy": true, 00:14:12.662 "nvme_iov_md": false 00:14:12.662 }, 00:14:12.662 "memory_domains": [ 00:14:12.662 { 00:14:12.662 "dma_device_id": "system", 00:14:12.662 "dma_device_type": 1 00:14:12.662 }, 00:14:12.662 { 00:14:12.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.662 "dma_device_type": 2 00:14:12.662 } 00:14:12.662 ], 00:14:12.662 "driver_specific": {} 00:14:12.662 } 00:14:12.662 ] 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.662 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.663 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.663 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.663 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.663 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.663 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.663 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.663 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.663 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.663 "name": "Existed_Raid", 00:14:12.663 "uuid": "c1fae6b3-44a6-4d7a-a8a3-d9be42573f19", 00:14:12.663 "strip_size_kb": 64, 00:14:12.663 "state": "configuring", 00:14:12.663 "raid_level": "concat", 00:14:12.663 "superblock": true, 00:14:12.663 "num_base_bdevs": 4, 00:14:12.663 "num_base_bdevs_discovered": 2, 00:14:12.663 
"num_base_bdevs_operational": 4, 00:14:12.663 "base_bdevs_list": [ 00:14:12.663 { 00:14:12.663 "name": "BaseBdev1", 00:14:12.663 "uuid": "153cf32f-3dc5-4c16-8427-206ec715b00b", 00:14:12.663 "is_configured": true, 00:14:12.663 "data_offset": 2048, 00:14:12.663 "data_size": 63488 00:14:12.663 }, 00:14:12.663 { 00:14:12.663 "name": "BaseBdev2", 00:14:12.663 "uuid": "805a807f-6f5a-4e3d-b0aa-f73718f412bf", 00:14:12.663 "is_configured": true, 00:14:12.663 "data_offset": 2048, 00:14:12.663 "data_size": 63488 00:14:12.663 }, 00:14:12.663 { 00:14:12.663 "name": "BaseBdev3", 00:14:12.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.663 "is_configured": false, 00:14:12.663 "data_offset": 0, 00:14:12.663 "data_size": 0 00:14:12.663 }, 00:14:12.663 { 00:14:12.663 "name": "BaseBdev4", 00:14:12.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.663 "is_configured": false, 00:14:12.663 "data_offset": 0, 00:14:12.663 "data_size": 0 00:14:12.663 } 00:14:12.663 ] 00:14:12.663 }' 00:14:12.663 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.663 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.231 [2024-12-06 18:12:38.703889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.231 BaseBdev3 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.231 [ 00:14:13.231 { 00:14:13.231 "name": "BaseBdev3", 00:14:13.231 "aliases": [ 00:14:13.231 "7811fa3c-ee14-4cd3-bcad-a3e62e77056c" 00:14:13.231 ], 00:14:13.231 "product_name": "Malloc disk", 00:14:13.231 "block_size": 512, 00:14:13.231 "num_blocks": 65536, 00:14:13.231 "uuid": "7811fa3c-ee14-4cd3-bcad-a3e62e77056c", 00:14:13.231 "assigned_rate_limits": { 00:14:13.231 "rw_ios_per_sec": 0, 00:14:13.231 "rw_mbytes_per_sec": 0, 00:14:13.231 "r_mbytes_per_sec": 0, 00:14:13.231 "w_mbytes_per_sec": 0 00:14:13.231 }, 00:14:13.231 "claimed": true, 00:14:13.231 "claim_type": "exclusive_write", 00:14:13.231 "zoned": false, 00:14:13.231 "supported_io_types": { 
00:14:13.231 "read": true, 00:14:13.231 "write": true, 00:14:13.231 "unmap": true, 00:14:13.231 "flush": true, 00:14:13.231 "reset": true, 00:14:13.231 "nvme_admin": false, 00:14:13.231 "nvme_io": false, 00:14:13.231 "nvme_io_md": false, 00:14:13.231 "write_zeroes": true, 00:14:13.231 "zcopy": true, 00:14:13.231 "get_zone_info": false, 00:14:13.231 "zone_management": false, 00:14:13.231 "zone_append": false, 00:14:13.231 "compare": false, 00:14:13.231 "compare_and_write": false, 00:14:13.231 "abort": true, 00:14:13.231 "seek_hole": false, 00:14:13.231 "seek_data": false, 00:14:13.231 "copy": true, 00:14:13.231 "nvme_iov_md": false 00:14:13.231 }, 00:14:13.231 "memory_domains": [ 00:14:13.231 { 00:14:13.231 "dma_device_id": "system", 00:14:13.231 "dma_device_type": 1 00:14:13.231 }, 00:14:13.231 { 00:14:13.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.231 "dma_device_type": 2 00:14:13.231 } 00:14:13.231 ], 00:14:13.231 "driver_specific": {} 00:14:13.231 } 00:14:13.231 ] 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.231 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.492 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.492 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.492 "name": "Existed_Raid", 00:14:13.492 "uuid": "c1fae6b3-44a6-4d7a-a8a3-d9be42573f19", 00:14:13.492 "strip_size_kb": 64, 00:14:13.492 "state": "configuring", 00:14:13.492 "raid_level": "concat", 00:14:13.492 "superblock": true, 00:14:13.492 "num_base_bdevs": 4, 00:14:13.492 "num_base_bdevs_discovered": 3, 00:14:13.492 "num_base_bdevs_operational": 4, 00:14:13.492 "base_bdevs_list": [ 00:14:13.492 { 00:14:13.492 "name": "BaseBdev1", 00:14:13.492 "uuid": "153cf32f-3dc5-4c16-8427-206ec715b00b", 00:14:13.492 "is_configured": true, 00:14:13.492 "data_offset": 2048, 00:14:13.492 "data_size": 63488 00:14:13.492 }, 00:14:13.492 { 00:14:13.492 "name": "BaseBdev2", 00:14:13.492 
"uuid": "805a807f-6f5a-4e3d-b0aa-f73718f412bf", 00:14:13.492 "is_configured": true, 00:14:13.492 "data_offset": 2048, 00:14:13.492 "data_size": 63488 00:14:13.492 }, 00:14:13.492 { 00:14:13.492 "name": "BaseBdev3", 00:14:13.492 "uuid": "7811fa3c-ee14-4cd3-bcad-a3e62e77056c", 00:14:13.492 "is_configured": true, 00:14:13.492 "data_offset": 2048, 00:14:13.492 "data_size": 63488 00:14:13.492 }, 00:14:13.492 { 00:14:13.492 "name": "BaseBdev4", 00:14:13.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.492 "is_configured": false, 00:14:13.492 "data_offset": 0, 00:14:13.492 "data_size": 0 00:14:13.492 } 00:14:13.492 ] 00:14:13.492 }' 00:14:13.492 18:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.492 18:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.061 [2024-12-06 18:12:39.317218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:14.061 [2024-12-06 18:12:39.317547] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:14.061 [2024-12-06 18:12:39.317566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:14.061 [2024-12-06 18:12:39.317951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:14.061 BaseBdev4 00:14:14.061 [2024-12-06 18:12:39.318139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:14.061 [2024-12-06 18:12:39.318160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:14.061 [2024-12-06 18:12:39.318330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.061 [ 00:14:14.061 { 00:14:14.061 "name": "BaseBdev4", 00:14:14.061 "aliases": [ 00:14:14.061 "7080ba30-1fda-4ac0-af04-68861fe1f7fe" 00:14:14.061 ], 00:14:14.061 "product_name": "Malloc disk", 00:14:14.061 "block_size": 512, 00:14:14.061 
"num_blocks": 65536, 00:14:14.061 "uuid": "7080ba30-1fda-4ac0-af04-68861fe1f7fe", 00:14:14.061 "assigned_rate_limits": { 00:14:14.061 "rw_ios_per_sec": 0, 00:14:14.061 "rw_mbytes_per_sec": 0, 00:14:14.061 "r_mbytes_per_sec": 0, 00:14:14.061 "w_mbytes_per_sec": 0 00:14:14.061 }, 00:14:14.061 "claimed": true, 00:14:14.061 "claim_type": "exclusive_write", 00:14:14.061 "zoned": false, 00:14:14.061 "supported_io_types": { 00:14:14.061 "read": true, 00:14:14.061 "write": true, 00:14:14.061 "unmap": true, 00:14:14.061 "flush": true, 00:14:14.061 "reset": true, 00:14:14.061 "nvme_admin": false, 00:14:14.061 "nvme_io": false, 00:14:14.061 "nvme_io_md": false, 00:14:14.061 "write_zeroes": true, 00:14:14.061 "zcopy": true, 00:14:14.061 "get_zone_info": false, 00:14:14.061 "zone_management": false, 00:14:14.061 "zone_append": false, 00:14:14.061 "compare": false, 00:14:14.061 "compare_and_write": false, 00:14:14.061 "abort": true, 00:14:14.061 "seek_hole": false, 00:14:14.061 "seek_data": false, 00:14:14.061 "copy": true, 00:14:14.061 "nvme_iov_md": false 00:14:14.061 }, 00:14:14.061 "memory_domains": [ 00:14:14.061 { 00:14:14.061 "dma_device_id": "system", 00:14:14.061 "dma_device_type": 1 00:14:14.061 }, 00:14:14.061 { 00:14:14.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.061 "dma_device_type": 2 00:14:14.061 } 00:14:14.061 ], 00:14:14.061 "driver_specific": {} 00:14:14.061 } 00:14:14.061 ] 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.061 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.062 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.062 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.062 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.062 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.062 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.062 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.062 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.062 "name": "Existed_Raid", 00:14:14.062 "uuid": "c1fae6b3-44a6-4d7a-a8a3-d9be42573f19", 00:14:14.062 "strip_size_kb": 64, 00:14:14.062 "state": "online", 00:14:14.062 "raid_level": "concat", 00:14:14.062 "superblock": true, 00:14:14.062 "num_base_bdevs": 4, 
00:14:14.062 "num_base_bdevs_discovered": 4, 00:14:14.062 "num_base_bdevs_operational": 4, 00:14:14.062 "base_bdevs_list": [ 00:14:14.062 { 00:14:14.062 "name": "BaseBdev1", 00:14:14.062 "uuid": "153cf32f-3dc5-4c16-8427-206ec715b00b", 00:14:14.062 "is_configured": true, 00:14:14.062 "data_offset": 2048, 00:14:14.062 "data_size": 63488 00:14:14.062 }, 00:14:14.062 { 00:14:14.062 "name": "BaseBdev2", 00:14:14.062 "uuid": "805a807f-6f5a-4e3d-b0aa-f73718f412bf", 00:14:14.062 "is_configured": true, 00:14:14.062 "data_offset": 2048, 00:14:14.062 "data_size": 63488 00:14:14.062 }, 00:14:14.062 { 00:14:14.062 "name": "BaseBdev3", 00:14:14.062 "uuid": "7811fa3c-ee14-4cd3-bcad-a3e62e77056c", 00:14:14.062 "is_configured": true, 00:14:14.062 "data_offset": 2048, 00:14:14.062 "data_size": 63488 00:14:14.062 }, 00:14:14.062 { 00:14:14.062 "name": "BaseBdev4", 00:14:14.062 "uuid": "7080ba30-1fda-4ac0-af04-68861fe1f7fe", 00:14:14.062 "is_configured": true, 00:14:14.062 "data_offset": 2048, 00:14:14.062 "data_size": 63488 00:14:14.062 } 00:14:14.062 ] 00:14:14.062 }' 00:14:14.062 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.062 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.629 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:14.629 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:14.629 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:14.629 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:14.629 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:14.630 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:14.630 
18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:14.630 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.630 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:14.630 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.630 [2024-12-06 18:12:39.865915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:14.630 18:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.630 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:14.630 "name": "Existed_Raid", 00:14:14.630 "aliases": [ 00:14:14.630 "c1fae6b3-44a6-4d7a-a8a3-d9be42573f19" 00:14:14.630 ], 00:14:14.630 "product_name": "Raid Volume", 00:14:14.630 "block_size": 512, 00:14:14.630 "num_blocks": 253952, 00:14:14.630 "uuid": "c1fae6b3-44a6-4d7a-a8a3-d9be42573f19", 00:14:14.630 "assigned_rate_limits": { 00:14:14.630 "rw_ios_per_sec": 0, 00:14:14.630 "rw_mbytes_per_sec": 0, 00:14:14.630 "r_mbytes_per_sec": 0, 00:14:14.630 "w_mbytes_per_sec": 0 00:14:14.630 }, 00:14:14.630 "claimed": false, 00:14:14.630 "zoned": false, 00:14:14.630 "supported_io_types": { 00:14:14.630 "read": true, 00:14:14.630 "write": true, 00:14:14.630 "unmap": true, 00:14:14.630 "flush": true, 00:14:14.630 "reset": true, 00:14:14.630 "nvme_admin": false, 00:14:14.630 "nvme_io": false, 00:14:14.630 "nvme_io_md": false, 00:14:14.630 "write_zeroes": true, 00:14:14.630 "zcopy": false, 00:14:14.630 "get_zone_info": false, 00:14:14.630 "zone_management": false, 00:14:14.630 "zone_append": false, 00:14:14.630 "compare": false, 00:14:14.630 "compare_and_write": false, 00:14:14.630 "abort": false, 00:14:14.630 "seek_hole": false, 00:14:14.630 "seek_data": false, 00:14:14.630 "copy": false, 00:14:14.630 
"nvme_iov_md": false 00:14:14.630 }, 00:14:14.630 "memory_domains": [ 00:14:14.630 { 00:14:14.630 "dma_device_id": "system", 00:14:14.630 "dma_device_type": 1 00:14:14.630 }, 00:14:14.630 { 00:14:14.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.630 "dma_device_type": 2 00:14:14.630 }, 00:14:14.630 { 00:14:14.630 "dma_device_id": "system", 00:14:14.630 "dma_device_type": 1 00:14:14.630 }, 00:14:14.630 { 00:14:14.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.630 "dma_device_type": 2 00:14:14.630 }, 00:14:14.630 { 00:14:14.630 "dma_device_id": "system", 00:14:14.630 "dma_device_type": 1 00:14:14.630 }, 00:14:14.630 { 00:14:14.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.630 "dma_device_type": 2 00:14:14.630 }, 00:14:14.630 { 00:14:14.630 "dma_device_id": "system", 00:14:14.630 "dma_device_type": 1 00:14:14.630 }, 00:14:14.630 { 00:14:14.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.630 "dma_device_type": 2 00:14:14.630 } 00:14:14.630 ], 00:14:14.630 "driver_specific": { 00:14:14.630 "raid": { 00:14:14.630 "uuid": "c1fae6b3-44a6-4d7a-a8a3-d9be42573f19", 00:14:14.630 "strip_size_kb": 64, 00:14:14.630 "state": "online", 00:14:14.630 "raid_level": "concat", 00:14:14.630 "superblock": true, 00:14:14.630 "num_base_bdevs": 4, 00:14:14.630 "num_base_bdevs_discovered": 4, 00:14:14.630 "num_base_bdevs_operational": 4, 00:14:14.630 "base_bdevs_list": [ 00:14:14.630 { 00:14:14.630 "name": "BaseBdev1", 00:14:14.630 "uuid": "153cf32f-3dc5-4c16-8427-206ec715b00b", 00:14:14.630 "is_configured": true, 00:14:14.630 "data_offset": 2048, 00:14:14.630 "data_size": 63488 00:14:14.630 }, 00:14:14.630 { 00:14:14.630 "name": "BaseBdev2", 00:14:14.630 "uuid": "805a807f-6f5a-4e3d-b0aa-f73718f412bf", 00:14:14.630 "is_configured": true, 00:14:14.630 "data_offset": 2048, 00:14:14.630 "data_size": 63488 00:14:14.630 }, 00:14:14.630 { 00:14:14.630 "name": "BaseBdev3", 00:14:14.630 "uuid": "7811fa3c-ee14-4cd3-bcad-a3e62e77056c", 00:14:14.630 "is_configured": true, 
00:14:14.630 "data_offset": 2048, 00:14:14.630 "data_size": 63488 00:14:14.630 }, 00:14:14.630 { 00:14:14.630 "name": "BaseBdev4", 00:14:14.630 "uuid": "7080ba30-1fda-4ac0-af04-68861fe1f7fe", 00:14:14.630 "is_configured": true, 00:14:14.630 "data_offset": 2048, 00:14:14.630 "data_size": 63488 00:14:14.630 } 00:14:14.630 ] 00:14:14.630 } 00:14:14.630 } 00:14:14.630 }' 00:14:14.630 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:14.630 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:14.630 BaseBdev2 00:14:14.630 BaseBdev3 00:14:14.630 BaseBdev4' 00:14:14.630 18:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.630 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:14.630 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.630 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.630 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:14.630 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.630 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.630 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.630 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.630 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.630 18:12:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.630 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:14.630 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.630 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.630 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.630 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.890 [2024-12-06 18:12:40.261708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.890 [2024-12-06 18:12:40.261746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:14.890 [2024-12-06 18:12:40.261843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:14.890 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.890 "name": "Existed_Raid", 00:14:14.890 "uuid": "c1fae6b3-44a6-4d7a-a8a3-d9be42573f19", 00:14:14.890 "strip_size_kb": 64, 00:14:14.890 "state": "offline", 00:14:14.890 "raid_level": "concat", 00:14:14.890 "superblock": true, 00:14:14.890 "num_base_bdevs": 4, 00:14:14.890 "num_base_bdevs_discovered": 3, 00:14:14.890 "num_base_bdevs_operational": 3, 00:14:14.890 "base_bdevs_list": [ 00:14:14.890 { 00:14:14.890 "name": null, 00:14:14.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.890 "is_configured": false, 00:14:14.890 "data_offset": 0, 00:14:14.890 "data_size": 63488 00:14:14.890 }, 00:14:14.890 { 00:14:14.890 "name": "BaseBdev2", 00:14:14.890 "uuid": "805a807f-6f5a-4e3d-b0aa-f73718f412bf", 00:14:14.890 "is_configured": true, 00:14:14.890 "data_offset": 2048, 00:14:14.890 "data_size": 63488 00:14:14.890 }, 00:14:14.890 { 00:14:14.890 "name": "BaseBdev3", 00:14:14.890 "uuid": "7811fa3c-ee14-4cd3-bcad-a3e62e77056c", 00:14:14.890 "is_configured": true, 00:14:14.890 "data_offset": 2048, 00:14:14.890 "data_size": 63488 00:14:14.890 }, 00:14:14.890 { 00:14:14.890 "name": "BaseBdev4", 00:14:14.891 "uuid": "7080ba30-1fda-4ac0-af04-68861fe1f7fe", 00:14:14.891 "is_configured": true, 00:14:14.891 "data_offset": 2048, 00:14:14.891 "data_size": 63488 00:14:14.891 } 00:14:14.891 ] 00:14:14.891 }' 00:14:14.891 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.891 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.463 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:15.463 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.463 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:15.463 18:12:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.463 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.463 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.463 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.463 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:15.463 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:15.463 18:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:15.463 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.463 18:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.463 [2024-12-06 18:12:40.948854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.722 [2024-12-06 18:12:41.121804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.722 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:15.981 18:12:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.981 [2024-12-06 18:12:41.273252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:15.981 [2024-12-06 18:12:41.273314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.981 BaseBdev2 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.981 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.981 [ 00:14:15.981 { 00:14:15.981 "name": "BaseBdev2", 00:14:15.981 "aliases": [ 00:14:15.981 
"eb22bfbb-fcb9-4ce8-a9d0-50bb96ed1fca" 00:14:15.981 ], 00:14:15.981 "product_name": "Malloc disk", 00:14:15.981 "block_size": 512, 00:14:15.981 "num_blocks": 65536, 00:14:15.981 "uuid": "eb22bfbb-fcb9-4ce8-a9d0-50bb96ed1fca", 00:14:15.981 "assigned_rate_limits": { 00:14:15.981 "rw_ios_per_sec": 0, 00:14:15.981 "rw_mbytes_per_sec": 0, 00:14:15.981 "r_mbytes_per_sec": 0, 00:14:15.981 "w_mbytes_per_sec": 0 00:14:15.981 }, 00:14:15.981 "claimed": false, 00:14:15.981 "zoned": false, 00:14:15.981 "supported_io_types": { 00:14:15.981 "read": true, 00:14:15.981 "write": true, 00:14:15.981 "unmap": true, 00:14:15.981 "flush": true, 00:14:15.981 "reset": true, 00:14:15.981 "nvme_admin": false, 00:14:15.981 "nvme_io": false, 00:14:15.981 "nvme_io_md": false, 00:14:15.981 "write_zeroes": true, 00:14:15.981 "zcopy": true, 00:14:15.981 "get_zone_info": false, 00:14:15.981 "zone_management": false, 00:14:15.981 "zone_append": false, 00:14:15.981 "compare": false, 00:14:15.981 "compare_and_write": false, 00:14:15.981 "abort": true, 00:14:15.981 "seek_hole": false, 00:14:15.981 "seek_data": false, 00:14:15.981 "copy": true, 00:14:15.981 "nvme_iov_md": false 00:14:15.981 }, 00:14:15.981 "memory_domains": [ 00:14:16.239 { 00:14:16.239 "dma_device_id": "system", 00:14:16.239 "dma_device_type": 1 00:14:16.239 }, 00:14:16.239 { 00:14:16.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.239 "dma_device_type": 2 00:14:16.239 } 00:14:16.239 ], 00:14:16.239 "driver_specific": {} 00:14:16.240 } 00:14:16.240 ] 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:16.240 18:12:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.240 BaseBdev3 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.240 [ 00:14:16.240 { 
00:14:16.240 "name": "BaseBdev3", 00:14:16.240 "aliases": [ 00:14:16.240 "f88a21ff-6691-4d5f-aeee-914a2b7889da" 00:14:16.240 ], 00:14:16.240 "product_name": "Malloc disk", 00:14:16.240 "block_size": 512, 00:14:16.240 "num_blocks": 65536, 00:14:16.240 "uuid": "f88a21ff-6691-4d5f-aeee-914a2b7889da", 00:14:16.240 "assigned_rate_limits": { 00:14:16.240 "rw_ios_per_sec": 0, 00:14:16.240 "rw_mbytes_per_sec": 0, 00:14:16.240 "r_mbytes_per_sec": 0, 00:14:16.240 "w_mbytes_per_sec": 0 00:14:16.240 }, 00:14:16.240 "claimed": false, 00:14:16.240 "zoned": false, 00:14:16.240 "supported_io_types": { 00:14:16.240 "read": true, 00:14:16.240 "write": true, 00:14:16.240 "unmap": true, 00:14:16.240 "flush": true, 00:14:16.240 "reset": true, 00:14:16.240 "nvme_admin": false, 00:14:16.240 "nvme_io": false, 00:14:16.240 "nvme_io_md": false, 00:14:16.240 "write_zeroes": true, 00:14:16.240 "zcopy": true, 00:14:16.240 "get_zone_info": false, 00:14:16.240 "zone_management": false, 00:14:16.240 "zone_append": false, 00:14:16.240 "compare": false, 00:14:16.240 "compare_and_write": false, 00:14:16.240 "abort": true, 00:14:16.240 "seek_hole": false, 00:14:16.240 "seek_data": false, 00:14:16.240 "copy": true, 00:14:16.240 "nvme_iov_md": false 00:14:16.240 }, 00:14:16.240 "memory_domains": [ 00:14:16.240 { 00:14:16.240 "dma_device_id": "system", 00:14:16.240 "dma_device_type": 1 00:14:16.240 }, 00:14:16.240 { 00:14:16.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.240 "dma_device_type": 2 00:14:16.240 } 00:14:16.240 ], 00:14:16.240 "driver_specific": {} 00:14:16.240 } 00:14:16.240 ] 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.240 BaseBdev4 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:16.240 [ 00:14:16.240 { 00:14:16.240 "name": "BaseBdev4", 00:14:16.240 "aliases": [ 00:14:16.240 "ecb2cd8e-fd8e-4a9d-a99c-1abc5e7f6e76" 00:14:16.240 ], 00:14:16.240 "product_name": "Malloc disk", 00:14:16.240 "block_size": 512, 00:14:16.240 "num_blocks": 65536, 00:14:16.240 "uuid": "ecb2cd8e-fd8e-4a9d-a99c-1abc5e7f6e76", 00:14:16.240 "assigned_rate_limits": { 00:14:16.240 "rw_ios_per_sec": 0, 00:14:16.240 "rw_mbytes_per_sec": 0, 00:14:16.240 "r_mbytes_per_sec": 0, 00:14:16.240 "w_mbytes_per_sec": 0 00:14:16.240 }, 00:14:16.240 "claimed": false, 00:14:16.240 "zoned": false, 00:14:16.240 "supported_io_types": { 00:14:16.240 "read": true, 00:14:16.240 "write": true, 00:14:16.240 "unmap": true, 00:14:16.240 "flush": true, 00:14:16.240 "reset": true, 00:14:16.240 "nvme_admin": false, 00:14:16.240 "nvme_io": false, 00:14:16.240 "nvme_io_md": false, 00:14:16.240 "write_zeroes": true, 00:14:16.240 "zcopy": true, 00:14:16.240 "get_zone_info": false, 00:14:16.240 "zone_management": false, 00:14:16.240 "zone_append": false, 00:14:16.240 "compare": false, 00:14:16.240 "compare_and_write": false, 00:14:16.240 "abort": true, 00:14:16.240 "seek_hole": false, 00:14:16.240 "seek_data": false, 00:14:16.240 "copy": true, 00:14:16.240 "nvme_iov_md": false 00:14:16.240 }, 00:14:16.240 "memory_domains": [ 00:14:16.240 { 00:14:16.240 "dma_device_id": "system", 00:14:16.240 "dma_device_type": 1 00:14:16.240 }, 00:14:16.240 { 00:14:16.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.240 "dma_device_type": 2 00:14:16.240 } 00:14:16.240 ], 00:14:16.240 "driver_specific": {} 00:14:16.240 } 00:14:16.240 ] 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:16.240 18:12:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.240 [2024-12-06 18:12:41.668208] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:16.240 [2024-12-06 18:12:41.668583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:16.240 [2024-12-06 18:12:41.668632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.240 [2024-12-06 18:12:41.671063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:16.240 [2024-12-06 18:12:41.671134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.240 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.241 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.241 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.241 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.241 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.241 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.241 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.241 "name": "Existed_Raid", 00:14:16.241 "uuid": "4725eeb5-c12c-427c-94d5-2fa936d7f260", 00:14:16.241 "strip_size_kb": 64, 00:14:16.241 "state": "configuring", 00:14:16.241 "raid_level": "concat", 00:14:16.241 "superblock": true, 00:14:16.241 "num_base_bdevs": 4, 00:14:16.241 "num_base_bdevs_discovered": 3, 00:14:16.241 "num_base_bdevs_operational": 4, 00:14:16.241 "base_bdevs_list": [ 00:14:16.241 { 00:14:16.241 "name": "BaseBdev1", 00:14:16.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.241 "is_configured": false, 00:14:16.241 "data_offset": 0, 00:14:16.241 "data_size": 0 00:14:16.241 }, 00:14:16.241 { 00:14:16.241 "name": "BaseBdev2", 00:14:16.241 "uuid": "eb22bfbb-fcb9-4ce8-a9d0-50bb96ed1fca", 00:14:16.241 "is_configured": true, 00:14:16.241 "data_offset": 2048, 00:14:16.241 "data_size": 63488 
00:14:16.241 }, 00:14:16.241 { 00:14:16.241 "name": "BaseBdev3", 00:14:16.241 "uuid": "f88a21ff-6691-4d5f-aeee-914a2b7889da", 00:14:16.241 "is_configured": true, 00:14:16.241 "data_offset": 2048, 00:14:16.241 "data_size": 63488 00:14:16.241 }, 00:14:16.241 { 00:14:16.241 "name": "BaseBdev4", 00:14:16.241 "uuid": "ecb2cd8e-fd8e-4a9d-a99c-1abc5e7f6e76", 00:14:16.241 "is_configured": true, 00:14:16.241 "data_offset": 2048, 00:14:16.241 "data_size": 63488 00:14:16.241 } 00:14:16.241 ] 00:14:16.241 }' 00:14:16.241 18:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.241 18:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.808 [2024-12-06 18:12:42.204405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.808 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.808 "name": "Existed_Raid", 00:14:16.808 "uuid": "4725eeb5-c12c-427c-94d5-2fa936d7f260", 00:14:16.808 "strip_size_kb": 64, 00:14:16.808 "state": "configuring", 00:14:16.808 "raid_level": "concat", 00:14:16.808 "superblock": true, 00:14:16.808 "num_base_bdevs": 4, 00:14:16.808 "num_base_bdevs_discovered": 2, 00:14:16.808 "num_base_bdevs_operational": 4, 00:14:16.808 "base_bdevs_list": [ 00:14:16.808 { 00:14:16.808 "name": "BaseBdev1", 00:14:16.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.808 "is_configured": false, 00:14:16.808 "data_offset": 0, 00:14:16.809 "data_size": 0 00:14:16.809 }, 00:14:16.809 { 00:14:16.809 "name": null, 00:14:16.809 "uuid": "eb22bfbb-fcb9-4ce8-a9d0-50bb96ed1fca", 00:14:16.809 "is_configured": false, 00:14:16.809 "data_offset": 0, 00:14:16.809 "data_size": 63488 
00:14:16.809 }, 00:14:16.809 { 00:14:16.809 "name": "BaseBdev3", 00:14:16.809 "uuid": "f88a21ff-6691-4d5f-aeee-914a2b7889da", 00:14:16.809 "is_configured": true, 00:14:16.809 "data_offset": 2048, 00:14:16.809 "data_size": 63488 00:14:16.809 }, 00:14:16.809 { 00:14:16.809 "name": "BaseBdev4", 00:14:16.809 "uuid": "ecb2cd8e-fd8e-4a9d-a99c-1abc5e7f6e76", 00:14:16.809 "is_configured": true, 00:14:16.809 "data_offset": 2048, 00:14:16.809 "data_size": 63488 00:14:16.809 } 00:14:16.809 ] 00:14:16.809 }' 00:14:16.809 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.809 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.377 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.377 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.377 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.377 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:17.377 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.377 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:17.377 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.378 [2024-12-06 18:12:42.824479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.378 BaseBdev1 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.378 [ 00:14:17.378 { 00:14:17.378 "name": "BaseBdev1", 00:14:17.378 "aliases": [ 00:14:17.378 "711f6056-5d09-451d-8cfd-483b5f2b1fc5" 00:14:17.378 ], 00:14:17.378 "product_name": "Malloc disk", 00:14:17.378 "block_size": 512, 00:14:17.378 "num_blocks": 65536, 00:14:17.378 "uuid": "711f6056-5d09-451d-8cfd-483b5f2b1fc5", 00:14:17.378 "assigned_rate_limits": { 00:14:17.378 "rw_ios_per_sec": 0, 00:14:17.378 "rw_mbytes_per_sec": 0, 
00:14:17.378 "r_mbytes_per_sec": 0, 00:14:17.378 "w_mbytes_per_sec": 0 00:14:17.378 }, 00:14:17.378 "claimed": true, 00:14:17.378 "claim_type": "exclusive_write", 00:14:17.378 "zoned": false, 00:14:17.378 "supported_io_types": { 00:14:17.378 "read": true, 00:14:17.378 "write": true, 00:14:17.378 "unmap": true, 00:14:17.378 "flush": true, 00:14:17.378 "reset": true, 00:14:17.378 "nvme_admin": false, 00:14:17.378 "nvme_io": false, 00:14:17.378 "nvme_io_md": false, 00:14:17.378 "write_zeroes": true, 00:14:17.378 "zcopy": true, 00:14:17.378 "get_zone_info": false, 00:14:17.378 "zone_management": false, 00:14:17.378 "zone_append": false, 00:14:17.378 "compare": false, 00:14:17.378 "compare_and_write": false, 00:14:17.378 "abort": true, 00:14:17.378 "seek_hole": false, 00:14:17.378 "seek_data": false, 00:14:17.378 "copy": true, 00:14:17.378 "nvme_iov_md": false 00:14:17.378 }, 00:14:17.378 "memory_domains": [ 00:14:17.378 { 00:14:17.378 "dma_device_id": "system", 00:14:17.378 "dma_device_type": 1 00:14:17.378 }, 00:14:17.378 { 00:14:17.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.378 "dma_device_type": 2 00:14:17.378 } 00:14:17.378 ], 00:14:17.378 "driver_specific": {} 00:14:17.378 } 00:14:17.378 ] 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:17.378 18:12:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.378 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.637 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.637 "name": "Existed_Raid", 00:14:17.637 "uuid": "4725eeb5-c12c-427c-94d5-2fa936d7f260", 00:14:17.637 "strip_size_kb": 64, 00:14:17.637 "state": "configuring", 00:14:17.637 "raid_level": "concat", 00:14:17.637 "superblock": true, 00:14:17.637 "num_base_bdevs": 4, 00:14:17.637 "num_base_bdevs_discovered": 3, 00:14:17.637 "num_base_bdevs_operational": 4, 00:14:17.637 "base_bdevs_list": [ 00:14:17.637 { 00:14:17.637 "name": "BaseBdev1", 00:14:17.637 "uuid": "711f6056-5d09-451d-8cfd-483b5f2b1fc5", 00:14:17.637 "is_configured": true, 00:14:17.637 "data_offset": 2048, 00:14:17.637 "data_size": 63488 00:14:17.637 }, 00:14:17.637 { 
00:14:17.637 "name": null, 00:14:17.637 "uuid": "eb22bfbb-fcb9-4ce8-a9d0-50bb96ed1fca", 00:14:17.637 "is_configured": false, 00:14:17.637 "data_offset": 0, 00:14:17.637 "data_size": 63488 00:14:17.637 }, 00:14:17.637 { 00:14:17.637 "name": "BaseBdev3", 00:14:17.637 "uuid": "f88a21ff-6691-4d5f-aeee-914a2b7889da", 00:14:17.637 "is_configured": true, 00:14:17.637 "data_offset": 2048, 00:14:17.637 "data_size": 63488 00:14:17.637 }, 00:14:17.637 { 00:14:17.637 "name": "BaseBdev4", 00:14:17.637 "uuid": "ecb2cd8e-fd8e-4a9d-a99c-1abc5e7f6e76", 00:14:17.637 "is_configured": true, 00:14:17.637 "data_offset": 2048, 00:14:17.637 "data_size": 63488 00:14:17.637 } 00:14:17.637 ] 00:14:17.637 }' 00:14:17.637 18:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.637 18:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.896 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.896 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:17.896 18:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.896 18:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.896 18:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.155 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:18.155 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:18.155 18:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.155 18:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.155 [2024-12-06 18:12:43.436799] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:18.155 18:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.155 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:18.155 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.155 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.155 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:18.156 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.156 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.156 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.156 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.156 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.156 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.156 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.156 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.156 18:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.156 18:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.156 18:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.156 18:12:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.156 "name": "Existed_Raid", 00:14:18.156 "uuid": "4725eeb5-c12c-427c-94d5-2fa936d7f260", 00:14:18.156 "strip_size_kb": 64, 00:14:18.156 "state": "configuring", 00:14:18.156 "raid_level": "concat", 00:14:18.156 "superblock": true, 00:14:18.156 "num_base_bdevs": 4, 00:14:18.156 "num_base_bdevs_discovered": 2, 00:14:18.156 "num_base_bdevs_operational": 4, 00:14:18.156 "base_bdevs_list": [ 00:14:18.156 { 00:14:18.156 "name": "BaseBdev1", 00:14:18.156 "uuid": "711f6056-5d09-451d-8cfd-483b5f2b1fc5", 00:14:18.156 "is_configured": true, 00:14:18.156 "data_offset": 2048, 00:14:18.156 "data_size": 63488 00:14:18.156 }, 00:14:18.156 { 00:14:18.156 "name": null, 00:14:18.156 "uuid": "eb22bfbb-fcb9-4ce8-a9d0-50bb96ed1fca", 00:14:18.156 "is_configured": false, 00:14:18.156 "data_offset": 0, 00:14:18.156 "data_size": 63488 00:14:18.156 }, 00:14:18.156 { 00:14:18.156 "name": null, 00:14:18.156 "uuid": "f88a21ff-6691-4d5f-aeee-914a2b7889da", 00:14:18.156 "is_configured": false, 00:14:18.156 "data_offset": 0, 00:14:18.156 "data_size": 63488 00:14:18.156 }, 00:14:18.156 { 00:14:18.156 "name": "BaseBdev4", 00:14:18.156 "uuid": "ecb2cd8e-fd8e-4a9d-a99c-1abc5e7f6e76", 00:14:18.156 "is_configured": true, 00:14:18.156 "data_offset": 2048, 00:14:18.156 "data_size": 63488 00:14:18.156 } 00:14:18.156 ] 00:14:18.156 }' 00:14:18.156 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.156 18:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.723 18:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.723 18:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.723 18:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.723 18:12:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:18.723 18:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.723 [2024-12-06 18:12:44.029113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.723 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.723 "name": "Existed_Raid", 00:14:18.723 "uuid": "4725eeb5-c12c-427c-94d5-2fa936d7f260", 00:14:18.723 "strip_size_kb": 64, 00:14:18.723 "state": "configuring", 00:14:18.723 "raid_level": "concat", 00:14:18.723 "superblock": true, 00:14:18.723 "num_base_bdevs": 4, 00:14:18.723 "num_base_bdevs_discovered": 3, 00:14:18.723 "num_base_bdevs_operational": 4, 00:14:18.723 "base_bdevs_list": [ 00:14:18.724 { 00:14:18.724 "name": "BaseBdev1", 00:14:18.724 "uuid": "711f6056-5d09-451d-8cfd-483b5f2b1fc5", 00:14:18.724 "is_configured": true, 00:14:18.724 "data_offset": 2048, 00:14:18.724 "data_size": 63488 00:14:18.724 }, 00:14:18.724 { 00:14:18.724 "name": null, 00:14:18.724 "uuid": "eb22bfbb-fcb9-4ce8-a9d0-50bb96ed1fca", 00:14:18.724 "is_configured": false, 00:14:18.724 "data_offset": 0, 00:14:18.724 "data_size": 63488 00:14:18.724 }, 00:14:18.724 { 00:14:18.724 "name": "BaseBdev3", 00:14:18.724 "uuid": "f88a21ff-6691-4d5f-aeee-914a2b7889da", 00:14:18.724 "is_configured": true, 00:14:18.724 "data_offset": 2048, 00:14:18.724 "data_size": 63488 00:14:18.724 }, 00:14:18.724 { 00:14:18.724 "name": "BaseBdev4", 00:14:18.724 "uuid": 
"ecb2cd8e-fd8e-4a9d-a99c-1abc5e7f6e76", 00:14:18.724 "is_configured": true, 00:14:18.724 "data_offset": 2048, 00:14:18.724 "data_size": 63488 00:14:18.724 } 00:14:18.724 ] 00:14:18.724 }' 00:14:18.724 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.724 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.292 [2024-12-06 18:12:44.641377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.292 "name": "Existed_Raid", 00:14:19.292 "uuid": "4725eeb5-c12c-427c-94d5-2fa936d7f260", 00:14:19.292 "strip_size_kb": 64, 00:14:19.292 "state": "configuring", 00:14:19.292 "raid_level": "concat", 00:14:19.292 "superblock": true, 00:14:19.292 "num_base_bdevs": 4, 00:14:19.292 "num_base_bdevs_discovered": 2, 00:14:19.292 "num_base_bdevs_operational": 4, 00:14:19.292 "base_bdevs_list": [ 00:14:19.292 { 00:14:19.292 "name": null, 00:14:19.292 
"uuid": "711f6056-5d09-451d-8cfd-483b5f2b1fc5", 00:14:19.292 "is_configured": false, 00:14:19.292 "data_offset": 0, 00:14:19.292 "data_size": 63488 00:14:19.292 }, 00:14:19.292 { 00:14:19.292 "name": null, 00:14:19.292 "uuid": "eb22bfbb-fcb9-4ce8-a9d0-50bb96ed1fca", 00:14:19.292 "is_configured": false, 00:14:19.292 "data_offset": 0, 00:14:19.292 "data_size": 63488 00:14:19.292 }, 00:14:19.292 { 00:14:19.292 "name": "BaseBdev3", 00:14:19.292 "uuid": "f88a21ff-6691-4d5f-aeee-914a2b7889da", 00:14:19.292 "is_configured": true, 00:14:19.292 "data_offset": 2048, 00:14:19.292 "data_size": 63488 00:14:19.292 }, 00:14:19.292 { 00:14:19.292 "name": "BaseBdev4", 00:14:19.292 "uuid": "ecb2cd8e-fd8e-4a9d-a99c-1abc5e7f6e76", 00:14:19.292 "is_configured": true, 00:14:19.292 "data_offset": 2048, 00:14:19.292 "data_size": 63488 00:14:19.292 } 00:14:19.292 ] 00:14:19.292 }' 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.292 18:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.860 [2024-12-06 18:12:45.351606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.860 18:12:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.119 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.119 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.119 "name": "Existed_Raid", 00:14:20.119 "uuid": "4725eeb5-c12c-427c-94d5-2fa936d7f260", 00:14:20.119 "strip_size_kb": 64, 00:14:20.119 "state": "configuring", 00:14:20.119 "raid_level": "concat", 00:14:20.119 "superblock": true, 00:14:20.119 "num_base_bdevs": 4, 00:14:20.119 "num_base_bdevs_discovered": 3, 00:14:20.119 "num_base_bdevs_operational": 4, 00:14:20.119 "base_bdevs_list": [ 00:14:20.119 { 00:14:20.119 "name": null, 00:14:20.119 "uuid": "711f6056-5d09-451d-8cfd-483b5f2b1fc5", 00:14:20.119 "is_configured": false, 00:14:20.119 "data_offset": 0, 00:14:20.119 "data_size": 63488 00:14:20.119 }, 00:14:20.119 { 00:14:20.119 "name": "BaseBdev2", 00:14:20.119 "uuid": "eb22bfbb-fcb9-4ce8-a9d0-50bb96ed1fca", 00:14:20.119 "is_configured": true, 00:14:20.119 "data_offset": 2048, 00:14:20.119 "data_size": 63488 00:14:20.119 }, 00:14:20.119 { 00:14:20.119 "name": "BaseBdev3", 00:14:20.119 "uuid": "f88a21ff-6691-4d5f-aeee-914a2b7889da", 00:14:20.119 "is_configured": true, 00:14:20.119 "data_offset": 2048, 00:14:20.119 "data_size": 63488 00:14:20.119 }, 00:14:20.119 { 00:14:20.119 "name": "BaseBdev4", 00:14:20.119 "uuid": "ecb2cd8e-fd8e-4a9d-a99c-1abc5e7f6e76", 00:14:20.119 "is_configured": true, 00:14:20.119 "data_offset": 2048, 00:14:20.119 "data_size": 63488 00:14:20.119 } 00:14:20.119 ] 00:14:20.119 }' 00:14:20.119 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.119 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.377 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.377 18:12:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:20.377 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.377 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.636 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.636 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:20.636 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.636 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:20.636 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.636 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.636 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.636 18:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 711f6056-5d09-451d-8cfd-483b5f2b1fc5 00:14:20.636 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.636 18:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.636 [2024-12-06 18:12:46.019442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:20.636 [2024-12-06 18:12:46.020008] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:20.636 [2024-12-06 18:12:46.020033] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:20.636 NewBaseBdev 00:14:20.636 [2024-12-06 18:12:46.020367] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:20.636 [2024-12-06 18:12:46.020537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:20.636 [2024-12-06 18:12:46.020565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:20.636 [2024-12-06 18:12:46.020717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.636 
18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.636 [ 00:14:20.636 { 00:14:20.636 "name": "NewBaseBdev", 00:14:20.636 "aliases": [ 00:14:20.636 "711f6056-5d09-451d-8cfd-483b5f2b1fc5" 00:14:20.636 ], 00:14:20.636 "product_name": "Malloc disk", 00:14:20.636 "block_size": 512, 00:14:20.636 "num_blocks": 65536, 00:14:20.636 "uuid": "711f6056-5d09-451d-8cfd-483b5f2b1fc5", 00:14:20.636 "assigned_rate_limits": { 00:14:20.636 "rw_ios_per_sec": 0, 00:14:20.636 "rw_mbytes_per_sec": 0, 00:14:20.636 "r_mbytes_per_sec": 0, 00:14:20.636 "w_mbytes_per_sec": 0 00:14:20.636 }, 00:14:20.636 "claimed": true, 00:14:20.636 "claim_type": "exclusive_write", 00:14:20.636 "zoned": false, 00:14:20.636 "supported_io_types": { 00:14:20.636 "read": true, 00:14:20.636 "write": true, 00:14:20.636 "unmap": true, 00:14:20.636 "flush": true, 00:14:20.636 "reset": true, 00:14:20.636 "nvme_admin": false, 00:14:20.636 "nvme_io": false, 00:14:20.636 "nvme_io_md": false, 00:14:20.636 "write_zeroes": true, 00:14:20.636 "zcopy": true, 00:14:20.636 "get_zone_info": false, 00:14:20.636 "zone_management": false, 00:14:20.636 "zone_append": false, 00:14:20.636 "compare": false, 00:14:20.636 "compare_and_write": false, 00:14:20.636 "abort": true, 00:14:20.636 "seek_hole": false, 00:14:20.636 "seek_data": false, 00:14:20.636 "copy": true, 00:14:20.636 "nvme_iov_md": false 00:14:20.636 }, 00:14:20.636 "memory_domains": [ 00:14:20.636 { 00:14:20.636 "dma_device_id": "system", 00:14:20.636 "dma_device_type": 1 00:14:20.636 }, 00:14:20.636 { 00:14:20.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.636 "dma_device_type": 2 00:14:20.636 } 00:14:20.636 ], 00:14:20.636 "driver_specific": {} 00:14:20.636 } 00:14:20.636 ] 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:20.636 18:12:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.636 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.636 "name": "Existed_Raid", 00:14:20.636 "uuid": "4725eeb5-c12c-427c-94d5-2fa936d7f260", 00:14:20.636 "strip_size_kb": 64, 00:14:20.636 
"state": "online", 00:14:20.636 "raid_level": "concat", 00:14:20.636 "superblock": true, 00:14:20.636 "num_base_bdevs": 4, 00:14:20.636 "num_base_bdevs_discovered": 4, 00:14:20.636 "num_base_bdevs_operational": 4, 00:14:20.636 "base_bdevs_list": [ 00:14:20.636 { 00:14:20.636 "name": "NewBaseBdev", 00:14:20.636 "uuid": "711f6056-5d09-451d-8cfd-483b5f2b1fc5", 00:14:20.636 "is_configured": true, 00:14:20.636 "data_offset": 2048, 00:14:20.636 "data_size": 63488 00:14:20.636 }, 00:14:20.636 { 00:14:20.636 "name": "BaseBdev2", 00:14:20.636 "uuid": "eb22bfbb-fcb9-4ce8-a9d0-50bb96ed1fca", 00:14:20.636 "is_configured": true, 00:14:20.636 "data_offset": 2048, 00:14:20.636 "data_size": 63488 00:14:20.636 }, 00:14:20.636 { 00:14:20.636 "name": "BaseBdev3", 00:14:20.636 "uuid": "f88a21ff-6691-4d5f-aeee-914a2b7889da", 00:14:20.636 "is_configured": true, 00:14:20.636 "data_offset": 2048, 00:14:20.636 "data_size": 63488 00:14:20.636 }, 00:14:20.636 { 00:14:20.637 "name": "BaseBdev4", 00:14:20.637 "uuid": "ecb2cd8e-fd8e-4a9d-a99c-1abc5e7f6e76", 00:14:20.637 "is_configured": true, 00:14:20.637 "data_offset": 2048, 00:14:20.637 "data_size": 63488 00:14:20.637 } 00:14:20.637 ] 00:14:20.637 }' 00:14:20.637 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.637 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.203 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:21.203 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:21.203 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:21.203 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:21.203 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:21.203 
18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:21.203 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:21.203 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:21.203 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.203 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.203 [2024-12-06 18:12:46.596191] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.203 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.203 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:21.203 "name": "Existed_Raid", 00:14:21.203 "aliases": [ 00:14:21.203 "4725eeb5-c12c-427c-94d5-2fa936d7f260" 00:14:21.203 ], 00:14:21.203 "product_name": "Raid Volume", 00:14:21.203 "block_size": 512, 00:14:21.203 "num_blocks": 253952, 00:14:21.203 "uuid": "4725eeb5-c12c-427c-94d5-2fa936d7f260", 00:14:21.203 "assigned_rate_limits": { 00:14:21.203 "rw_ios_per_sec": 0, 00:14:21.203 "rw_mbytes_per_sec": 0, 00:14:21.203 "r_mbytes_per_sec": 0, 00:14:21.203 "w_mbytes_per_sec": 0 00:14:21.203 }, 00:14:21.203 "claimed": false, 00:14:21.203 "zoned": false, 00:14:21.203 "supported_io_types": { 00:14:21.203 "read": true, 00:14:21.203 "write": true, 00:14:21.203 "unmap": true, 00:14:21.203 "flush": true, 00:14:21.203 "reset": true, 00:14:21.203 "nvme_admin": false, 00:14:21.203 "nvme_io": false, 00:14:21.203 "nvme_io_md": false, 00:14:21.203 "write_zeroes": true, 00:14:21.203 "zcopy": false, 00:14:21.203 "get_zone_info": false, 00:14:21.203 "zone_management": false, 00:14:21.203 "zone_append": false, 00:14:21.203 "compare": false, 00:14:21.203 "compare_and_write": false, 00:14:21.203 "abort": 
false, 00:14:21.203 "seek_hole": false, 00:14:21.203 "seek_data": false, 00:14:21.203 "copy": false, 00:14:21.203 "nvme_iov_md": false 00:14:21.203 }, 00:14:21.203 "memory_domains": [ 00:14:21.203 { 00:14:21.203 "dma_device_id": "system", 00:14:21.203 "dma_device_type": 1 00:14:21.203 }, 00:14:21.203 { 00:14:21.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.203 "dma_device_type": 2 00:14:21.203 }, 00:14:21.203 { 00:14:21.203 "dma_device_id": "system", 00:14:21.203 "dma_device_type": 1 00:14:21.203 }, 00:14:21.203 { 00:14:21.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.203 "dma_device_type": 2 00:14:21.203 }, 00:14:21.203 { 00:14:21.203 "dma_device_id": "system", 00:14:21.203 "dma_device_type": 1 00:14:21.203 }, 00:14:21.203 { 00:14:21.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.203 "dma_device_type": 2 00:14:21.203 }, 00:14:21.203 { 00:14:21.203 "dma_device_id": "system", 00:14:21.203 "dma_device_type": 1 00:14:21.203 }, 00:14:21.203 { 00:14:21.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.203 "dma_device_type": 2 00:14:21.203 } 00:14:21.203 ], 00:14:21.203 "driver_specific": { 00:14:21.203 "raid": { 00:14:21.203 "uuid": "4725eeb5-c12c-427c-94d5-2fa936d7f260", 00:14:21.203 "strip_size_kb": 64, 00:14:21.203 "state": "online", 00:14:21.203 "raid_level": "concat", 00:14:21.203 "superblock": true, 00:14:21.203 "num_base_bdevs": 4, 00:14:21.203 "num_base_bdevs_discovered": 4, 00:14:21.203 "num_base_bdevs_operational": 4, 00:14:21.203 "base_bdevs_list": [ 00:14:21.203 { 00:14:21.203 "name": "NewBaseBdev", 00:14:21.203 "uuid": "711f6056-5d09-451d-8cfd-483b5f2b1fc5", 00:14:21.203 "is_configured": true, 00:14:21.203 "data_offset": 2048, 00:14:21.203 "data_size": 63488 00:14:21.203 }, 00:14:21.203 { 00:14:21.203 "name": "BaseBdev2", 00:14:21.203 "uuid": "eb22bfbb-fcb9-4ce8-a9d0-50bb96ed1fca", 00:14:21.203 "is_configured": true, 00:14:21.203 "data_offset": 2048, 00:14:21.203 "data_size": 63488 00:14:21.203 }, 00:14:21.203 { 00:14:21.203 
"name": "BaseBdev3", 00:14:21.203 "uuid": "f88a21ff-6691-4d5f-aeee-914a2b7889da", 00:14:21.203 "is_configured": true, 00:14:21.203 "data_offset": 2048, 00:14:21.203 "data_size": 63488 00:14:21.203 }, 00:14:21.203 { 00:14:21.203 "name": "BaseBdev4", 00:14:21.203 "uuid": "ecb2cd8e-fd8e-4a9d-a99c-1abc5e7f6e76", 00:14:21.203 "is_configured": true, 00:14:21.203 "data_offset": 2048, 00:14:21.203 "data_size": 63488 00:14:21.203 } 00:14:21.203 ] 00:14:21.203 } 00:14:21.203 } 00:14:21.203 }' 00:14:21.203 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:21.203 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:21.203 BaseBdev2 00:14:21.203 BaseBdev3 00:14:21.203 BaseBdev4' 00:14:21.203 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.464 18:12:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.464 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.464 [2024-12-06 18:12:46.979826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:21.464 [2024-12-06 18:12:46.979986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.464 [2024-12-06 18:12:46.980218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.464 [2024-12-06 18:12:46.980324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.464 [2024-12-06 18:12:46.980349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:14:21.723 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.723 18:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72130 00:14:21.723 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72130 ']' 00:14:21.723 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72130 00:14:21.723 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:21.723 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.723 18:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72130 00:14:21.723 killing process with pid 72130 00:14:21.723 18:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.723 18:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.723 18:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72130' 00:14:21.723 18:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72130 00:14:21.723 [2024-12-06 18:12:47.015730] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:21.723 18:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72130 00:14:21.983 [2024-12-06 18:12:47.377481] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:22.919 18:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:22.919 00:14:22.919 real 0m13.088s 00:14:22.919 user 0m21.771s 00:14:22.919 sys 0m1.729s 00:14:22.919 ************************************ 00:14:22.919 END TEST raid_state_function_test_sb 00:14:22.919 18:12:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.919 18:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.919 ************************************ 00:14:23.177 18:12:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:14:23.177 18:12:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:23.177 18:12:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.177 18:12:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:23.177 ************************************ 00:14:23.177 START TEST raid_superblock_test 00:14:23.177 ************************************ 00:14:23.177 18:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:14:23.177 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:23.177 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:23.177 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:23.177 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:23.177 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:23.177 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:23.177 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:23.177 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:23.177 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:23.177 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:23.177 18:12:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:23.178 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:23.178 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:23.178 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:23.178 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:23.178 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:23.178 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72817 00:14:23.178 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72817 00:14:23.178 18:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:23.178 18:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72817 ']' 00:14:23.178 18:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.178 18:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.178 18:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.178 18:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.178 18:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.178 [2024-12-06 18:12:48.595617] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:14:23.178 [2024-12-06 18:12:48.596379] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72817 ] 00:14:23.436 [2024-12-06 18:12:48.780247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.436 [2024-12-06 18:12:48.913125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.694 [2024-12-06 18:12:49.117192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.694 [2024-12-06 18:12:49.117396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:24.261 
18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.261 malloc1 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.261 [2024-12-06 18:12:49.634067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:24.261 [2024-12-06 18:12:49.634281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.261 [2024-12-06 18:12:49.634359] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:24.261 [2024-12-06 18:12:49.634579] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.261 [2024-12-06 18:12:49.637384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.261 [2024-12-06 18:12:49.637548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:24.261 pt1 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.261 malloc2 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.261 [2024-12-06 18:12:49.690006] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:24.261 [2024-12-06 18:12:49.690081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.261 [2024-12-06 18:12:49.690120] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:24.261 [2024-12-06 18:12:49.690135] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.261 [2024-12-06 18:12:49.692835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.261 [2024-12-06 18:12:49.692878] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:24.261 
pt2 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.261 malloc3 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.261 [2024-12-06 18:12:49.757523] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:24.261 [2024-12-06 18:12:49.757729] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.261 [2024-12-06 18:12:49.757794] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:24.261 [2024-12-06 18:12:49.757815] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.261 [2024-12-06 18:12:49.760561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.261 [2024-12-06 18:12:49.760608] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:24.261 pt3 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:24.261 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:24.262 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:24.262 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:24.262 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.262 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.520 malloc4 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.520 [2024-12-06 18:12:49.813048] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:24.520 [2024-12-06 18:12:49.813136] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.520 [2024-12-06 18:12:49.813172] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:24.520 [2024-12-06 18:12:49.813188] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.520 [2024-12-06 18:12:49.816092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.520 [2024-12-06 18:12:49.816272] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:24.520 pt4 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.520 [2024-12-06 18:12:49.825235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:24.520 [2024-12-06 
18:12:49.827682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:24.520 [2024-12-06 18:12:49.827957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:24.520 [2024-12-06 18:12:49.828040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:24.520 [2024-12-06 18:12:49.828297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:24.520 [2024-12-06 18:12:49.828315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:24.520 [2024-12-06 18:12:49.828662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:24.520 [2024-12-06 18:12:49.828922] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:24.520 [2024-12-06 18:12:49.828945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:24.520 [2024-12-06 18:12:49.829198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.520 "name": "raid_bdev1", 00:14:24.520 "uuid": "5ae4b38f-60e4-48f2-87fd-ba22b390c7f7", 00:14:24.520 "strip_size_kb": 64, 00:14:24.520 "state": "online", 00:14:24.520 "raid_level": "concat", 00:14:24.520 "superblock": true, 00:14:24.520 "num_base_bdevs": 4, 00:14:24.520 "num_base_bdevs_discovered": 4, 00:14:24.520 "num_base_bdevs_operational": 4, 00:14:24.520 "base_bdevs_list": [ 00:14:24.520 { 00:14:24.520 "name": "pt1", 00:14:24.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:24.520 "is_configured": true, 00:14:24.520 "data_offset": 2048, 00:14:24.520 "data_size": 63488 00:14:24.520 }, 00:14:24.520 { 00:14:24.520 "name": "pt2", 00:14:24.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.520 "is_configured": true, 00:14:24.520 "data_offset": 2048, 00:14:24.520 "data_size": 63488 00:14:24.520 }, 00:14:24.520 { 00:14:24.520 "name": "pt3", 00:14:24.520 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:24.520 "is_configured": true, 00:14:24.520 "data_offset": 2048, 00:14:24.520 
"data_size": 63488 00:14:24.520 }, 00:14:24.520 { 00:14:24.520 "name": "pt4", 00:14:24.520 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:24.520 "is_configured": true, 00:14:24.520 "data_offset": 2048, 00:14:24.520 "data_size": 63488 00:14:24.520 } 00:14:24.520 ] 00:14:24.520 }' 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.520 18:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.087 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:25.087 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:25.087 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:25.087 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:25.087 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:25.087 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:25.087 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:25.087 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.087 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.087 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:25.087 [2024-12-06 18:12:50.369857] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:25.088 "name": "raid_bdev1", 00:14:25.088 "aliases": [ 00:14:25.088 "5ae4b38f-60e4-48f2-87fd-ba22b390c7f7" 
00:14:25.088 ], 00:14:25.088 "product_name": "Raid Volume", 00:14:25.088 "block_size": 512, 00:14:25.088 "num_blocks": 253952, 00:14:25.088 "uuid": "5ae4b38f-60e4-48f2-87fd-ba22b390c7f7", 00:14:25.088 "assigned_rate_limits": { 00:14:25.088 "rw_ios_per_sec": 0, 00:14:25.088 "rw_mbytes_per_sec": 0, 00:14:25.088 "r_mbytes_per_sec": 0, 00:14:25.088 "w_mbytes_per_sec": 0 00:14:25.088 }, 00:14:25.088 "claimed": false, 00:14:25.088 "zoned": false, 00:14:25.088 "supported_io_types": { 00:14:25.088 "read": true, 00:14:25.088 "write": true, 00:14:25.088 "unmap": true, 00:14:25.088 "flush": true, 00:14:25.088 "reset": true, 00:14:25.088 "nvme_admin": false, 00:14:25.088 "nvme_io": false, 00:14:25.088 "nvme_io_md": false, 00:14:25.088 "write_zeroes": true, 00:14:25.088 "zcopy": false, 00:14:25.088 "get_zone_info": false, 00:14:25.088 "zone_management": false, 00:14:25.088 "zone_append": false, 00:14:25.088 "compare": false, 00:14:25.088 "compare_and_write": false, 00:14:25.088 "abort": false, 00:14:25.088 "seek_hole": false, 00:14:25.088 "seek_data": false, 00:14:25.088 "copy": false, 00:14:25.088 "nvme_iov_md": false 00:14:25.088 }, 00:14:25.088 "memory_domains": [ 00:14:25.088 { 00:14:25.088 "dma_device_id": "system", 00:14:25.088 "dma_device_type": 1 00:14:25.088 }, 00:14:25.088 { 00:14:25.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.088 "dma_device_type": 2 00:14:25.088 }, 00:14:25.088 { 00:14:25.088 "dma_device_id": "system", 00:14:25.088 "dma_device_type": 1 00:14:25.088 }, 00:14:25.088 { 00:14:25.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.088 "dma_device_type": 2 00:14:25.088 }, 00:14:25.088 { 00:14:25.088 "dma_device_id": "system", 00:14:25.088 "dma_device_type": 1 00:14:25.088 }, 00:14:25.088 { 00:14:25.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.088 "dma_device_type": 2 00:14:25.088 }, 00:14:25.088 { 00:14:25.088 "dma_device_id": "system", 00:14:25.088 "dma_device_type": 1 00:14:25.088 }, 00:14:25.088 { 00:14:25.088 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:25.088 "dma_device_type": 2 00:14:25.088 } 00:14:25.088 ], 00:14:25.088 "driver_specific": { 00:14:25.088 "raid": { 00:14:25.088 "uuid": "5ae4b38f-60e4-48f2-87fd-ba22b390c7f7", 00:14:25.088 "strip_size_kb": 64, 00:14:25.088 "state": "online", 00:14:25.088 "raid_level": "concat", 00:14:25.088 "superblock": true, 00:14:25.088 "num_base_bdevs": 4, 00:14:25.088 "num_base_bdevs_discovered": 4, 00:14:25.088 "num_base_bdevs_operational": 4, 00:14:25.088 "base_bdevs_list": [ 00:14:25.088 { 00:14:25.088 "name": "pt1", 00:14:25.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:25.088 "is_configured": true, 00:14:25.088 "data_offset": 2048, 00:14:25.088 "data_size": 63488 00:14:25.088 }, 00:14:25.088 { 00:14:25.088 "name": "pt2", 00:14:25.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:25.088 "is_configured": true, 00:14:25.088 "data_offset": 2048, 00:14:25.088 "data_size": 63488 00:14:25.088 }, 00:14:25.088 { 00:14:25.088 "name": "pt3", 00:14:25.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:25.088 "is_configured": true, 00:14:25.088 "data_offset": 2048, 00:14:25.088 "data_size": 63488 00:14:25.088 }, 00:14:25.088 { 00:14:25.088 "name": "pt4", 00:14:25.088 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:25.088 "is_configured": true, 00:14:25.088 "data_offset": 2048, 00:14:25.088 "data_size": 63488 00:14:25.088 } 00:14:25.088 ] 00:14:25.088 } 00:14:25.088 } 00:14:25.088 }' 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:25.088 pt2 00:14:25.088 pt3 00:14:25.088 pt4' 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.088 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.345 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.345 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.345 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.345 18:12:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.345 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:25.345 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.345 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.345 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.345 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.346 [2024-12-06 18:12:50.729867] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5ae4b38f-60e4-48f2-87fd-ba22b390c7f7 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5ae4b38f-60e4-48f2-87fd-ba22b390c7f7 ']' 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.346 [2024-12-06 18:12:50.773458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.346 [2024-12-06 18:12:50.773605] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.346 [2024-12-06 18:12:50.773728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.346 [2024-12-06 18:12:50.773842] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.346 [2024-12-06 18:12:50.773868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.346 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.606 18:12:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.606 [2024-12-06 18:12:50.925528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:25.606 [2024-12-06 18:12:50.927935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:25.606 [2024-12-06 18:12:50.928138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:25.606 [2024-12-06 18:12:50.928206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:25.606 [2024-12-06 18:12:50.928284] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:25.606 [2024-12-06 18:12:50.928361] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:25.606 [2024-12-06 18:12:50.928395] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:25.606 [2024-12-06 18:12:50.928427] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:25.606 [2024-12-06 18:12:50.928450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.606 [2024-12-06 18:12:50.928467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:14:25.606 request: 00:14:25.606 { 00:14:25.606 "name": "raid_bdev1", 00:14:25.606 "raid_level": "concat", 00:14:25.606 "base_bdevs": [ 00:14:25.606 "malloc1", 00:14:25.606 "malloc2", 00:14:25.606 "malloc3", 00:14:25.606 "malloc4" 00:14:25.606 ], 00:14:25.606 "strip_size_kb": 64, 00:14:25.606 "superblock": false, 00:14:25.606 "method": "bdev_raid_create", 00:14:25.606 "req_id": 1 00:14:25.606 } 00:14:25.606 Got JSON-RPC error response 00:14:25.606 response: 00:14:25.606 { 00:14:25.606 "code": -17, 00:14:25.606 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:25.606 } 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:25.606 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:25.607 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:14:25.607 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.607 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.607 [2024-12-06 18:12:50.993516] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:25.607 [2024-12-06 18:12:50.993705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.607 [2024-12-06 18:12:50.993790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:25.607 [2024-12-06 18:12:50.993942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.607 [2024-12-06 18:12:50.996788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.607 [2024-12-06 18:12:50.996953] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:25.607 [2024-12-06 18:12:50.997157] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:25.607 [2024-12-06 18:12:50.997247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:25.607 pt1 00:14:25.607 18:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.607 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:25.607 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.607 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.607 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:25.607 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.607 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:14:25.607 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.607 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.607 18:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.607 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.607 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.607 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.607 18:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.607 18:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.607 18:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.607 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.607 "name": "raid_bdev1", 00:14:25.607 "uuid": "5ae4b38f-60e4-48f2-87fd-ba22b390c7f7", 00:14:25.607 "strip_size_kb": 64, 00:14:25.607 "state": "configuring", 00:14:25.607 "raid_level": "concat", 00:14:25.607 "superblock": true, 00:14:25.607 "num_base_bdevs": 4, 00:14:25.607 "num_base_bdevs_discovered": 1, 00:14:25.607 "num_base_bdevs_operational": 4, 00:14:25.607 "base_bdevs_list": [ 00:14:25.607 { 00:14:25.607 "name": "pt1", 00:14:25.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:25.607 "is_configured": true, 00:14:25.607 "data_offset": 2048, 00:14:25.607 "data_size": 63488 00:14:25.607 }, 00:14:25.607 { 00:14:25.607 "name": null, 00:14:25.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:25.607 "is_configured": false, 00:14:25.607 "data_offset": 2048, 00:14:25.607 "data_size": 63488 00:14:25.607 }, 00:14:25.607 { 00:14:25.607 "name": null, 00:14:25.607 
"uuid": "00000000-0000-0000-0000-000000000003", 00:14:25.607 "is_configured": false, 00:14:25.607 "data_offset": 2048, 00:14:25.607 "data_size": 63488 00:14:25.607 }, 00:14:25.607 { 00:14:25.607 "name": null, 00:14:25.607 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:25.607 "is_configured": false, 00:14:25.607 "data_offset": 2048, 00:14:25.607 "data_size": 63488 00:14:25.607 } 00:14:25.607 ] 00:14:25.607 }' 00:14:25.607 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.607 18:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.188 [2024-12-06 18:12:51.513674] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:26.188 [2024-12-06 18:12:51.513780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.188 [2024-12-06 18:12:51.513812] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:26.188 [2024-12-06 18:12:51.513831] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.188 [2024-12-06 18:12:51.514373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.188 [2024-12-06 18:12:51.514410] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:26.188 [2024-12-06 18:12:51.514512] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:26.188 [2024-12-06 18:12:51.514550] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:26.188 pt2 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.188 [2024-12-06 18:12:51.525695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.188 18:12:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.188 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.188 "name": "raid_bdev1", 00:14:26.188 "uuid": "5ae4b38f-60e4-48f2-87fd-ba22b390c7f7", 00:14:26.188 "strip_size_kb": 64, 00:14:26.188 "state": "configuring", 00:14:26.188 "raid_level": "concat", 00:14:26.188 "superblock": true, 00:14:26.188 "num_base_bdevs": 4, 00:14:26.188 "num_base_bdevs_discovered": 1, 00:14:26.188 "num_base_bdevs_operational": 4, 00:14:26.188 "base_bdevs_list": [ 00:14:26.188 { 00:14:26.188 "name": "pt1", 00:14:26.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:26.188 "is_configured": true, 00:14:26.188 "data_offset": 2048, 00:14:26.188 "data_size": 63488 00:14:26.188 }, 00:14:26.188 { 00:14:26.188 "name": null, 00:14:26.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.188 "is_configured": false, 00:14:26.188 "data_offset": 0, 00:14:26.188 "data_size": 63488 00:14:26.188 }, 00:14:26.188 { 00:14:26.189 "name": null, 00:14:26.189 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.189 "is_configured": false, 00:14:26.189 "data_offset": 2048, 00:14:26.189 "data_size": 63488 00:14:26.189 }, 00:14:26.189 { 00:14:26.189 "name": null, 00:14:26.189 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:26.189 "is_configured": false, 00:14:26.189 "data_offset": 2048, 00:14:26.189 "data_size": 63488 00:14:26.189 } 00:14:26.189 ] 00:14:26.189 }' 00:14:26.189 18:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.189 18:12:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.757 [2024-12-06 18:12:52.089831] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:26.757 [2024-12-06 18:12:52.089907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.757 [2024-12-06 18:12:52.089938] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:26.757 [2024-12-06 18:12:52.089954] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.757 [2024-12-06 18:12:52.090486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.757 [2024-12-06 18:12:52.090528] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:26.757 [2024-12-06 18:12:52.090631] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:26.757 [2024-12-06 18:12:52.090677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:26.757 pt2 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.757 [2024-12-06 18:12:52.097789] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:26.757 [2024-12-06 18:12:52.097975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.757 [2024-12-06 18:12:52.098013] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:26.757 [2024-12-06 18:12:52.098028] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.757 [2024-12-06 18:12:52.098488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.757 [2024-12-06 18:12:52.098521] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:26.757 [2024-12-06 18:12:52.098606] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:26.757 [2024-12-06 18:12:52.098656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:26.757 pt3 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.757 [2024-12-06 18:12:52.105786] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:26.757 [2024-12-06 18:12:52.105844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.757 [2024-12-06 18:12:52.105872] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:26.757 [2024-12-06 18:12:52.105886] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.757 [2024-12-06 18:12:52.106372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.757 [2024-12-06 18:12:52.106412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:26.757 [2024-12-06 18:12:52.106502] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:26.757 [2024-12-06 18:12:52.106537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:26.757 [2024-12-06 18:12:52.106722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:26.757 [2024-12-06 18:12:52.106737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:26.757 [2024-12-06 18:12:52.107064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:26.757 [2024-12-06 18:12:52.107259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:26.757 [2024-12-06 18:12:52.107281] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:26.757 [2024-12-06 18:12:52.107434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.757 pt4 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.757 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.757 "name": "raid_bdev1", 00:14:26.757 "uuid": "5ae4b38f-60e4-48f2-87fd-ba22b390c7f7", 00:14:26.757 "strip_size_kb": 64, 00:14:26.757 "state": "online", 00:14:26.757 "raid_level": "concat", 00:14:26.757 
"superblock": true, 00:14:26.757 "num_base_bdevs": 4, 00:14:26.757 "num_base_bdevs_discovered": 4, 00:14:26.757 "num_base_bdevs_operational": 4, 00:14:26.757 "base_bdevs_list": [ 00:14:26.757 { 00:14:26.757 "name": "pt1", 00:14:26.757 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:26.757 "is_configured": true, 00:14:26.757 "data_offset": 2048, 00:14:26.757 "data_size": 63488 00:14:26.757 }, 00:14:26.757 { 00:14:26.757 "name": "pt2", 00:14:26.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.757 "is_configured": true, 00:14:26.757 "data_offset": 2048, 00:14:26.757 "data_size": 63488 00:14:26.757 }, 00:14:26.757 { 00:14:26.757 "name": "pt3", 00:14:26.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.757 "is_configured": true, 00:14:26.758 "data_offset": 2048, 00:14:26.758 "data_size": 63488 00:14:26.758 }, 00:14:26.758 { 00:14:26.758 "name": "pt4", 00:14:26.758 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:26.758 "is_configured": true, 00:14:26.758 "data_offset": 2048, 00:14:26.758 "data_size": 63488 00:14:26.758 } 00:14:26.758 ] 00:14:26.758 }' 00:14:26.758 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.758 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:27.330 18:12:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:27.330 [2024-12-06 18:12:52.618372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:27.330 "name": "raid_bdev1", 00:14:27.330 "aliases": [ 00:14:27.330 "5ae4b38f-60e4-48f2-87fd-ba22b390c7f7" 00:14:27.330 ], 00:14:27.330 "product_name": "Raid Volume", 00:14:27.330 "block_size": 512, 00:14:27.330 "num_blocks": 253952, 00:14:27.330 "uuid": "5ae4b38f-60e4-48f2-87fd-ba22b390c7f7", 00:14:27.330 "assigned_rate_limits": { 00:14:27.330 "rw_ios_per_sec": 0, 00:14:27.330 "rw_mbytes_per_sec": 0, 00:14:27.330 "r_mbytes_per_sec": 0, 00:14:27.330 "w_mbytes_per_sec": 0 00:14:27.330 }, 00:14:27.330 "claimed": false, 00:14:27.330 "zoned": false, 00:14:27.330 "supported_io_types": { 00:14:27.330 "read": true, 00:14:27.330 "write": true, 00:14:27.330 "unmap": true, 00:14:27.330 "flush": true, 00:14:27.330 "reset": true, 00:14:27.330 "nvme_admin": false, 00:14:27.330 "nvme_io": false, 00:14:27.330 "nvme_io_md": false, 00:14:27.330 "write_zeroes": true, 00:14:27.330 "zcopy": false, 00:14:27.330 "get_zone_info": false, 00:14:27.330 "zone_management": false, 00:14:27.330 "zone_append": false, 00:14:27.330 "compare": false, 00:14:27.330 "compare_and_write": false, 00:14:27.330 "abort": false, 00:14:27.330 "seek_hole": false, 00:14:27.330 "seek_data": false, 00:14:27.330 "copy": false, 00:14:27.330 "nvme_iov_md": false 00:14:27.330 }, 00:14:27.330 
"memory_domains": [ 00:14:27.330 { 00:14:27.330 "dma_device_id": "system", 00:14:27.330 "dma_device_type": 1 00:14:27.330 }, 00:14:27.330 { 00:14:27.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.330 "dma_device_type": 2 00:14:27.330 }, 00:14:27.330 { 00:14:27.330 "dma_device_id": "system", 00:14:27.330 "dma_device_type": 1 00:14:27.330 }, 00:14:27.330 { 00:14:27.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.330 "dma_device_type": 2 00:14:27.330 }, 00:14:27.330 { 00:14:27.330 "dma_device_id": "system", 00:14:27.330 "dma_device_type": 1 00:14:27.330 }, 00:14:27.330 { 00:14:27.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.330 "dma_device_type": 2 00:14:27.330 }, 00:14:27.330 { 00:14:27.330 "dma_device_id": "system", 00:14:27.330 "dma_device_type": 1 00:14:27.330 }, 00:14:27.330 { 00:14:27.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.330 "dma_device_type": 2 00:14:27.330 } 00:14:27.330 ], 00:14:27.330 "driver_specific": { 00:14:27.330 "raid": { 00:14:27.330 "uuid": "5ae4b38f-60e4-48f2-87fd-ba22b390c7f7", 00:14:27.330 "strip_size_kb": 64, 00:14:27.330 "state": "online", 00:14:27.330 "raid_level": "concat", 00:14:27.330 "superblock": true, 00:14:27.330 "num_base_bdevs": 4, 00:14:27.330 "num_base_bdevs_discovered": 4, 00:14:27.330 "num_base_bdevs_operational": 4, 00:14:27.330 "base_bdevs_list": [ 00:14:27.330 { 00:14:27.330 "name": "pt1", 00:14:27.330 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.330 "is_configured": true, 00:14:27.330 "data_offset": 2048, 00:14:27.330 "data_size": 63488 00:14:27.330 }, 00:14:27.330 { 00:14:27.330 "name": "pt2", 00:14:27.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.330 "is_configured": true, 00:14:27.330 "data_offset": 2048, 00:14:27.330 "data_size": 63488 00:14:27.330 }, 00:14:27.330 { 00:14:27.330 "name": "pt3", 00:14:27.330 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:27.330 "is_configured": true, 00:14:27.330 "data_offset": 2048, 00:14:27.330 "data_size": 63488 
00:14:27.330 }, 00:14:27.330 { 00:14:27.330 "name": "pt4", 00:14:27.330 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:27.330 "is_configured": true, 00:14:27.330 "data_offset": 2048, 00:14:27.330 "data_size": 63488 00:14:27.330 } 00:14:27.330 ] 00:14:27.330 } 00:14:27.330 } 00:14:27.330 }' 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:27.330 pt2 00:14:27.330 pt3 00:14:27.330 pt4' 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.330 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.589 [2024-12-06 18:12:52.970416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.589 18:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.589 18:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5ae4b38f-60e4-48f2-87fd-ba22b390c7f7 '!=' 5ae4b38f-60e4-48f2-87fd-ba22b390c7f7 ']' 00:14:27.589 18:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:27.590 18:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:27.590 18:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:27.590 18:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72817 00:14:27.590 18:12:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72817 ']' 00:14:27.590 18:12:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72817 00:14:27.590 18:12:53 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:14:27.590 18:12:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.590 18:12:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72817 00:14:27.590 killing process with pid 72817 00:14:27.590 18:12:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.590 18:12:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.590 18:12:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72817' 00:14:27.590 18:12:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72817 00:14:27.590 [2024-12-06 18:12:53.049339] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.590 18:12:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72817 00:14:27.590 [2024-12-06 18:12:53.049445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.590 [2024-12-06 18:12:53.049544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.590 [2024-12-06 18:12:53.049560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:28.157 [2024-12-06 18:12:53.409449] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.095 18:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:29.095 00:14:29.095 real 0m5.958s 00:14:29.095 user 0m8.993s 00:14:29.095 sys 0m0.828s 00:14:29.095 18:12:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.095 ************************************ 00:14:29.095 END TEST raid_superblock_test 00:14:29.095 ************************************ 00:14:29.095 18:12:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.095 18:12:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:14:29.095 18:12:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:29.095 18:12:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.095 18:12:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.095 ************************************ 00:14:29.095 START TEST raid_read_error_test 00:14:29.095 ************************************ 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hfi09qfDkR 00:14:29.095 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73086 00:14:29.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:29.096 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73086 00:14:29.096 18:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:29.096 18:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73086 ']' 00:14:29.096 18:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.096 18:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.096 18:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.096 18:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.096 18:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.096 [2024-12-06 18:12:54.610178] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:14:29.096 [2024-12-06 18:12:54.610550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73086 ] 00:14:29.355 [2024-12-06 18:12:54.794677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.615 [2024-12-06 18:12:54.926408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.615 [2024-12-06 18:12:55.130990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.615 [2024-12-06 18:12:55.131272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.229 BaseBdev1_malloc 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.229 true 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.229 [2024-12-06 18:12:55.668699] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:30.229 [2024-12-06 18:12:55.668806] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.229 [2024-12-06 18:12:55.668843] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:30.229 [2024-12-06 18:12:55.668866] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.229 [2024-12-06 18:12:55.671848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.229 [2024-12-06 18:12:55.671906] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:30.229 BaseBdev1 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.229 BaseBdev2_malloc 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.229 true 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.229 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.229 [2024-12-06 18:12:55.726037] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:30.229 [2024-12-06 18:12:55.726140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.229 [2024-12-06 18:12:55.726187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:30.229 [2024-12-06 18:12:55.726224] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.229 [2024-12-06 18:12:55.729168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.229 [2024-12-06 18:12:55.729238] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:30.229 BaseBdev2 00:14:30.230 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.230 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:30.230 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:30.230 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.230 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.489 BaseBdev3_malloc 00:14:30.489 18:12:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.489 true 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.489 [2024-12-06 18:12:55.790567] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:30.489 [2024-12-06 18:12:55.790684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.489 [2024-12-06 18:12:55.790717] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:30.489 [2024-12-06 18:12:55.790739] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.489 [2024-12-06 18:12:55.793728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.489 [2024-12-06 18:12:55.794002] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:30.489 BaseBdev3 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.489 BaseBdev4_malloc 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.489 true 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.489 [2024-12-06 18:12:55.845245] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:30.489 [2024-12-06 18:12:55.845513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.489 [2024-12-06 18:12:55.845557] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:30.489 [2024-12-06 18:12:55.845614] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.489 [2024-12-06 18:12:55.848925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.489 [2024-12-06 18:12:55.849013] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:30.489 BaseBdev4 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.489 [2024-12-06 18:12:55.853449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.489 [2024-12-06 18:12:55.856265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.489 [2024-12-06 18:12:55.856567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.489 [2024-12-06 18:12:55.856698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:30.489 [2024-12-06 18:12:55.857051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:30.489 [2024-12-06 18:12:55.857081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:30.489 [2024-12-06 18:12:55.857445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:30.489 [2024-12-06 18:12:55.857710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:30.489 [2024-12-06 18:12:55.857731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:30.489 [2024-12-06 18:12:55.858021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:30.489 18:12:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.489 "name": "raid_bdev1", 00:14:30.489 "uuid": "17ee8fcd-d0ce-4e11-9820-945b084f07e7", 00:14:30.489 "strip_size_kb": 64, 00:14:30.489 "state": "online", 00:14:30.489 "raid_level": "concat", 00:14:30.489 "superblock": true, 00:14:30.489 "num_base_bdevs": 4, 00:14:30.489 "num_base_bdevs_discovered": 4, 00:14:30.489 "num_base_bdevs_operational": 4, 00:14:30.489 "base_bdevs_list": [ 
00:14:30.489 { 00:14:30.489 "name": "BaseBdev1", 00:14:30.489 "uuid": "fbd54bd8-6fcd-55d1-ad8f-1ca52274aefd", 00:14:30.489 "is_configured": true, 00:14:30.489 "data_offset": 2048, 00:14:30.489 "data_size": 63488 00:14:30.489 }, 00:14:30.489 { 00:14:30.489 "name": "BaseBdev2", 00:14:30.489 "uuid": "fd2bb6a3-05ca-59da-9f0b-4e173e87bc71", 00:14:30.489 "is_configured": true, 00:14:30.489 "data_offset": 2048, 00:14:30.489 "data_size": 63488 00:14:30.489 }, 00:14:30.489 { 00:14:30.489 "name": "BaseBdev3", 00:14:30.489 "uuid": "f8f8cd44-09bc-57f2-835b-343846b8aea1", 00:14:30.489 "is_configured": true, 00:14:30.489 "data_offset": 2048, 00:14:30.489 "data_size": 63488 00:14:30.489 }, 00:14:30.489 { 00:14:30.489 "name": "BaseBdev4", 00:14:30.489 "uuid": "bc2afaeb-615a-5178-b3a5-666413ed4844", 00:14:30.489 "is_configured": true, 00:14:30.489 "data_offset": 2048, 00:14:30.489 "data_size": 63488 00:14:30.489 } 00:14:30.489 ] 00:14:30.489 }' 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.489 18:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.056 18:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:31.056 18:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:31.056 [2024-12-06 18:12:56.507672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:31.992 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:31.992 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.992 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.992 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.992 18:12:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:31.992 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:31.992 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:31.992 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:31.992 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.992 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.992 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:31.992 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.993 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.993 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.993 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.993 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.993 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.993 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.993 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.993 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.993 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.993 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.993 18:12:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.993 "name": "raid_bdev1", 00:14:31.993 "uuid": "17ee8fcd-d0ce-4e11-9820-945b084f07e7", 00:14:31.993 "strip_size_kb": 64, 00:14:31.993 "state": "online", 00:14:31.993 "raid_level": "concat", 00:14:31.993 "superblock": true, 00:14:31.993 "num_base_bdevs": 4, 00:14:31.993 "num_base_bdevs_discovered": 4, 00:14:31.993 "num_base_bdevs_operational": 4, 00:14:31.993 "base_bdevs_list": [ 00:14:31.993 { 00:14:31.993 "name": "BaseBdev1", 00:14:31.993 "uuid": "fbd54bd8-6fcd-55d1-ad8f-1ca52274aefd", 00:14:31.993 "is_configured": true, 00:14:31.993 "data_offset": 2048, 00:14:31.993 "data_size": 63488 00:14:31.993 }, 00:14:31.993 { 00:14:31.993 "name": "BaseBdev2", 00:14:31.993 "uuid": "fd2bb6a3-05ca-59da-9f0b-4e173e87bc71", 00:14:31.993 "is_configured": true, 00:14:31.993 "data_offset": 2048, 00:14:31.993 "data_size": 63488 00:14:31.993 }, 00:14:31.993 { 00:14:31.993 "name": "BaseBdev3", 00:14:31.993 "uuid": "f8f8cd44-09bc-57f2-835b-343846b8aea1", 00:14:31.993 "is_configured": true, 00:14:31.993 "data_offset": 2048, 00:14:31.993 "data_size": 63488 00:14:31.993 }, 00:14:31.993 { 00:14:31.993 "name": "BaseBdev4", 00:14:31.993 "uuid": "bc2afaeb-615a-5178-b3a5-666413ed4844", 00:14:31.993 "is_configured": true, 00:14:31.993 "data_offset": 2048, 00:14:31.993 "data_size": 63488 00:14:31.993 } 00:14:31.993 ] 00:14:31.993 }' 00:14:31.993 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.993 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.560 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:32.560 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.560 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.560 [2024-12-06 18:12:57.906022] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.560 [2024-12-06 18:12:57.906103] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.560 [2024-12-06 18:12:57.909889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.560 [2024-12-06 18:12:57.910123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.560 [2024-12-06 18:12:57.910236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.560 [2024-12-06 18:12:57.910267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:32.560 { 00:14:32.560 "results": [ 00:14:32.560 { 00:14:32.560 "job": "raid_bdev1", 00:14:32.560 "core_mask": "0x1", 00:14:32.560 "workload": "randrw", 00:14:32.560 "percentage": 50, 00:14:32.560 "status": "finished", 00:14:32.560 "queue_depth": 1, 00:14:32.560 "io_size": 131072, 00:14:32.560 "runtime": 1.395806, 00:14:32.560 "iops": 9720.548557607575, 00:14:32.560 "mibps": 1215.068569700947, 00:14:32.560 "io_failed": 1, 00:14:32.560 "io_timeout": 0, 00:14:32.560 "avg_latency_us": 142.63001949631177, 00:14:32.560 "min_latency_us": 39.33090909090909, 00:14:32.560 "max_latency_us": 1980.9745454545455 00:14:32.560 } 00:14:32.560 ], 00:14:32.560 "core_count": 1 00:14:32.560 } 00:14:32.560 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.560 18:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73086 00:14:32.560 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73086 ']' 00:14:32.561 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73086 00:14:32.561 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:32.561 18:12:57 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.561 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73086 00:14:32.561 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.561 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.561 killing process with pid 73086 00:14:32.561 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73086' 00:14:32.561 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73086 00:14:32.561 [2024-12-06 18:12:57.940536] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.561 18:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73086 00:14:32.819 [2024-12-06 18:12:58.216455] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.195 18:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hfi09qfDkR 00:14:34.195 18:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:34.195 18:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:34.196 18:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:14:34.196 18:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:34.196 18:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:34.196 18:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:34.196 ************************************ 00:14:34.196 END TEST raid_read_error_test 00:14:34.196 ************************************ 00:14:34.196 18:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:14:34.196 00:14:34.196 real 0m4.814s 
00:14:34.196 user 0m6.003s 00:14:34.196 sys 0m0.549s 00:14:34.196 18:12:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.196 18:12:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.196 18:12:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:14:34.196 18:12:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:34.196 18:12:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.196 18:12:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.196 ************************************ 00:14:34.196 START TEST raid_write_error_test 00:14:34.196 ************************************ 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3CSsvohovl 00:14:34.196 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73233 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73233 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73233 ']' 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.196 18:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.196 [2024-12-06 18:12:59.475792] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:14:34.196 [2024-12-06 18:12:59.476243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73233 ] 00:14:34.196 [2024-12-06 18:12:59.657714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.454 [2024-12-06 18:12:59.789941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.713 [2024-12-06 18:12:59.981036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.713 [2024-12-06 18:12:59.981083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.971 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.972 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:34.972 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:34.972 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:34.972 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.972 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.972 BaseBdev1_malloc 00:14:34.972 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.972 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:34.972 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.972 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.231 true 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.231 [2024-12-06 18:13:00.499681] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:35.231 [2024-12-06 18:13:00.499979] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.231 [2024-12-06 18:13:00.500041] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:35.231 [2024-12-06 18:13:00.500074] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.231 [2024-12-06 18:13:00.502956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.231 [2024-12-06 18:13:00.503149] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:35.231 BaseBdev1 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.231 BaseBdev2_malloc 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:35.231 18:13:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.231 true 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.231 [2024-12-06 18:13:00.554442] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:35.231 [2024-12-06 18:13:00.554537] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.231 [2024-12-06 18:13:00.554566] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:35.231 [2024-12-06 18:13:00.554587] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.231 [2024-12-06 18:13:00.557600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.231 [2024-12-06 18:13:00.557657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:35.231 BaseBdev2 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:35.231 BaseBdev3_malloc 00:14:35.231 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.232 true 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.232 [2024-12-06 18:13:00.630067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:35.232 [2024-12-06 18:13:00.630173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.232 [2024-12-06 18:13:00.630207] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:35.232 [2024-12-06 18:13:00.630228] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.232 [2024-12-06 18:13:00.633297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.232 [2024-12-06 18:13:00.633364] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:35.232 BaseBdev3 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.232 BaseBdev4_malloc 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.232 true 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.232 [2024-12-06 18:13:00.688849] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:35.232 [2024-12-06 18:13:00.688938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.232 [2024-12-06 18:13:00.688985] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:35.232 [2024-12-06 18:13:00.689006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.232 [2024-12-06 18:13:00.691816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.232 [2024-12-06 18:13:00.691881] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:35.232 BaseBdev4 
00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.232 [2024-12-06 18:13:00.696975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.232 [2024-12-06 18:13:00.699483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.232 [2024-12-06 18:13:00.699787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.232 [2024-12-06 18:13:00.699916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:35.232 [2024-12-06 18:13:00.700261] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:35.232 [2024-12-06 18:13:00.700290] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:35.232 [2024-12-06 18:13:00.700645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:35.232 [2024-12-06 18:13:00.700920] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:35.232 [2024-12-06 18:13:00.700958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:35.232 [2024-12-06 18:13:00.701261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.232 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.491 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.491 "name": "raid_bdev1", 00:14:35.491 "uuid": "dff1d487-47f9-4c33-bead-af05129f0ebb", 00:14:35.491 "strip_size_kb": 64, 00:14:35.491 "state": "online", 00:14:35.491 "raid_level": "concat", 00:14:35.491 "superblock": true, 00:14:35.491 "num_base_bdevs": 4, 00:14:35.491 "num_base_bdevs_discovered": 4, 00:14:35.491 
"num_base_bdevs_operational": 4, 00:14:35.491 "base_bdevs_list": [ 00:14:35.491 { 00:14:35.491 "name": "BaseBdev1", 00:14:35.491 "uuid": "99832c40-7b41-51f3-84ec-969d82a74018", 00:14:35.491 "is_configured": true, 00:14:35.491 "data_offset": 2048, 00:14:35.491 "data_size": 63488 00:14:35.491 }, 00:14:35.491 { 00:14:35.491 "name": "BaseBdev2", 00:14:35.491 "uuid": "12f1458c-5d56-5853-86f8-f3fa2cda158a", 00:14:35.491 "is_configured": true, 00:14:35.491 "data_offset": 2048, 00:14:35.491 "data_size": 63488 00:14:35.491 }, 00:14:35.491 { 00:14:35.491 "name": "BaseBdev3", 00:14:35.491 "uuid": "af356c18-a986-511e-b4b6-0bd8332afa94", 00:14:35.491 "is_configured": true, 00:14:35.491 "data_offset": 2048, 00:14:35.491 "data_size": 63488 00:14:35.491 }, 00:14:35.491 { 00:14:35.491 "name": "BaseBdev4", 00:14:35.491 "uuid": "bcb4b5e0-5c17-59e9-81a2-528209d5bd9a", 00:14:35.491 "is_configured": true, 00:14:35.491 "data_offset": 2048, 00:14:35.491 "data_size": 63488 00:14:35.491 } 00:14:35.491 ] 00:14:35.491 }' 00:14:35.491 18:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.491 18:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.751 18:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:35.751 18:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:36.009 [2024-12-06 18:13:01.346927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.947 18:13:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.947 "name": "raid_bdev1", 00:14:36.947 "uuid": "dff1d487-47f9-4c33-bead-af05129f0ebb", 00:14:36.947 "strip_size_kb": 64, 00:14:36.947 "state": "online", 00:14:36.947 "raid_level": "concat", 00:14:36.947 "superblock": true, 00:14:36.947 "num_base_bdevs": 4, 00:14:36.947 "num_base_bdevs_discovered": 4, 00:14:36.947 "num_base_bdevs_operational": 4, 00:14:36.947 "base_bdevs_list": [ 00:14:36.947 { 00:14:36.947 "name": "BaseBdev1", 00:14:36.947 "uuid": "99832c40-7b41-51f3-84ec-969d82a74018", 00:14:36.947 "is_configured": true, 00:14:36.947 "data_offset": 2048, 00:14:36.947 "data_size": 63488 00:14:36.947 }, 00:14:36.947 { 00:14:36.947 "name": "BaseBdev2", 00:14:36.947 "uuid": "12f1458c-5d56-5853-86f8-f3fa2cda158a", 00:14:36.947 "is_configured": true, 00:14:36.947 "data_offset": 2048, 00:14:36.947 "data_size": 63488 00:14:36.947 }, 00:14:36.947 { 00:14:36.947 "name": "BaseBdev3", 00:14:36.947 "uuid": "af356c18-a986-511e-b4b6-0bd8332afa94", 00:14:36.947 "is_configured": true, 00:14:36.947 "data_offset": 2048, 00:14:36.947 "data_size": 63488 00:14:36.947 }, 00:14:36.947 { 00:14:36.947 "name": "BaseBdev4", 00:14:36.947 "uuid": "bcb4b5e0-5c17-59e9-81a2-528209d5bd9a", 00:14:36.947 "is_configured": true, 00:14:36.947 "data_offset": 2048, 00:14:36.947 "data_size": 63488 00:14:36.947 } 00:14:36.947 ] 00:14:36.947 }' 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.947 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.514 [2024-12-06 18:13:02.777331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.514 [2024-12-06 18:13:02.777587] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.514 [2024-12-06 18:13:02.781374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.514 [2024-12-06 18:13:02.781666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.514 [2024-12-06 18:13:02.781919] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.514 [2024-12-06 18:13:02.782102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:37.514 { 00:14:37.514 "results": [ 00:14:37.514 { 00:14:37.514 "job": "raid_bdev1", 00:14:37.514 "core_mask": "0x1", 00:14:37.514 "workload": "randrw", 00:14:37.514 "percentage": 50, 00:14:37.514 "status": "finished", 00:14:37.514 "queue_depth": 1, 00:14:37.514 "io_size": 131072, 00:14:37.514 "runtime": 1.428002, 00:14:37.514 "iops": 9764.692206313437, 00:14:37.514 "mibps": 1220.5865257891796, 00:14:37.514 "io_failed": 1, 00:14:37.514 "io_timeout": 0, 00:14:37.514 "avg_latency_us": 142.06162547671045, 00:14:37.514 "min_latency_us": 42.123636363636365, 00:14:37.514 "max_latency_us": 1854.370909090909 00:14:37.514 } 00:14:37.514 ], 00:14:37.514 "core_count": 1 00:14:37.514 } 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73233 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73233 ']' 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73233 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73233 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:37.514 killing process with pid 73233 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73233' 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73233 00:14:37.514 [2024-12-06 18:13:02.820948] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:37.514 18:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73233 00:14:37.773 [2024-12-06 18:13:03.100299] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:38.713 18:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3CSsvohovl 00:14:38.713 18:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:38.713 18:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:38.713 18:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:14:38.713 18:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:38.713 18:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:38.713 18:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:38.713 ************************************ 00:14:38.713 END TEST raid_write_error_test 00:14:38.713 ************************************ 00:14:38.713 18:13:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:14:38.713 00:14:38.713 real 0m4.824s 00:14:38.713 user 0m5.930s 00:14:38.713 sys 0m0.613s 00:14:38.713 18:13:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.713 18:13:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.713 18:13:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:38.713 18:13:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:14:38.713 18:13:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:38.713 18:13:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.713 18:13:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:38.993 ************************************ 00:14:38.993 START TEST raid_state_function_test 00:14:38.993 ************************************ 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:38.993 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:38.993 18:13:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:38.994 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:38.994 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73371 00:14:38.994 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:38.994 Process raid pid: 73371 00:14:38.994 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73371' 00:14:38.994 18:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73371 00:14:38.994 18:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73371 ']' 00:14:38.994 18:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.994 18:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.994 18:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.994 18:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.994 18:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.994 [2024-12-06 18:13:04.332223] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:14:38.994 [2024-12-06 18:13:04.332390] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.994 [2024-12-06 18:13:04.506384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.251 [2024-12-06 18:13:04.638012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.510 [2024-12-06 18:13:04.837510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.510 [2024-12-06 18:13:04.837850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.075 [2024-12-06 18:13:05.334794] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.075 [2024-12-06 18:13:05.335017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.075 [2024-12-06 18:13:05.335049] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:40.075 [2024-12-06 18:13:05.335072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:40.075 [2024-12-06 18:13:05.335084] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:40.075 [2024-12-06 18:13:05.335102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:40.075 [2024-12-06 18:13:05.335114] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:40.075 [2024-12-06 18:13:05.335131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.075 "name": "Existed_Raid", 00:14:40.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.075 "strip_size_kb": 0, 00:14:40.075 "state": "configuring", 00:14:40.075 "raid_level": "raid1", 00:14:40.075 "superblock": false, 00:14:40.075 "num_base_bdevs": 4, 00:14:40.075 "num_base_bdevs_discovered": 0, 00:14:40.075 "num_base_bdevs_operational": 4, 00:14:40.075 "base_bdevs_list": [ 00:14:40.075 { 00:14:40.075 "name": "BaseBdev1", 00:14:40.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.075 "is_configured": false, 00:14:40.075 "data_offset": 0, 00:14:40.075 "data_size": 0 00:14:40.075 }, 00:14:40.075 { 00:14:40.075 "name": "BaseBdev2", 00:14:40.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.075 "is_configured": false, 00:14:40.075 "data_offset": 0, 00:14:40.075 "data_size": 0 00:14:40.075 }, 00:14:40.075 { 00:14:40.075 "name": "BaseBdev3", 00:14:40.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.075 "is_configured": false, 00:14:40.075 "data_offset": 0, 00:14:40.075 "data_size": 0 00:14:40.075 }, 00:14:40.075 { 00:14:40.075 "name": "BaseBdev4", 00:14:40.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.075 "is_configured": false, 00:14:40.075 "data_offset": 0, 00:14:40.075 "data_size": 0 00:14:40.075 } 00:14:40.075 ] 00:14:40.075 }' 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.075 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.639 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:14:40.639 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.639 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.639 [2024-12-06 18:13:05.882922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:40.639 [2024-12-06 18:13:05.883115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:40.639 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.639 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.640 [2024-12-06 18:13:05.890887] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.640 [2024-12-06 18:13:05.890948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.640 [2024-12-06 18:13:05.890966] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:40.640 [2024-12-06 18:13:05.890985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:40.640 [2024-12-06 18:13:05.890997] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:40.640 [2024-12-06 18:13:05.891019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:40.640 [2024-12-06 18:13:05.891032] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:40.640 [2024-12-06 18:13:05.891048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.640 [2024-12-06 18:13:05.934263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.640 BaseBdev1 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.640 [ 00:14:40.640 { 00:14:40.640 "name": "BaseBdev1", 00:14:40.640 "aliases": [ 00:14:40.640 "1d75b3f4-8f72-485d-8607-314a533e201e" 00:14:40.640 ], 00:14:40.640 "product_name": "Malloc disk", 00:14:40.640 "block_size": 512, 00:14:40.640 "num_blocks": 65536, 00:14:40.640 "uuid": "1d75b3f4-8f72-485d-8607-314a533e201e", 00:14:40.640 "assigned_rate_limits": { 00:14:40.640 "rw_ios_per_sec": 0, 00:14:40.640 "rw_mbytes_per_sec": 0, 00:14:40.640 "r_mbytes_per_sec": 0, 00:14:40.640 "w_mbytes_per_sec": 0 00:14:40.640 }, 00:14:40.640 "claimed": true, 00:14:40.640 "claim_type": "exclusive_write", 00:14:40.640 "zoned": false, 00:14:40.640 "supported_io_types": { 00:14:40.640 "read": true, 00:14:40.640 "write": true, 00:14:40.640 "unmap": true, 00:14:40.640 "flush": true, 00:14:40.640 "reset": true, 00:14:40.640 "nvme_admin": false, 00:14:40.640 "nvme_io": false, 00:14:40.640 "nvme_io_md": false, 00:14:40.640 "write_zeroes": true, 00:14:40.640 "zcopy": true, 00:14:40.640 "get_zone_info": false, 00:14:40.640 "zone_management": false, 00:14:40.640 "zone_append": false, 00:14:40.640 "compare": false, 00:14:40.640 "compare_and_write": false, 00:14:40.640 "abort": true, 00:14:40.640 "seek_hole": false, 00:14:40.640 "seek_data": false, 00:14:40.640 "copy": true, 00:14:40.640 "nvme_iov_md": false 00:14:40.640 }, 00:14:40.640 "memory_domains": [ 00:14:40.640 { 00:14:40.640 "dma_device_id": "system", 00:14:40.640 "dma_device_type": 1 00:14:40.640 }, 00:14:40.640 { 00:14:40.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.640 "dma_device_type": 2 00:14:40.640 } 00:14:40.640 ], 00:14:40.640 "driver_specific": {} 00:14:40.640 } 00:14:40.640 ] 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.640 18:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.640 18:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.640 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.640 "name": "Existed_Raid", 00:14:40.640 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:40.640 "strip_size_kb": 0, 00:14:40.640 "state": "configuring", 00:14:40.640 "raid_level": "raid1", 00:14:40.640 "superblock": false, 00:14:40.640 "num_base_bdevs": 4, 00:14:40.640 "num_base_bdevs_discovered": 1, 00:14:40.640 "num_base_bdevs_operational": 4, 00:14:40.640 "base_bdevs_list": [ 00:14:40.640 { 00:14:40.640 "name": "BaseBdev1", 00:14:40.640 "uuid": "1d75b3f4-8f72-485d-8607-314a533e201e", 00:14:40.640 "is_configured": true, 00:14:40.640 "data_offset": 0, 00:14:40.640 "data_size": 65536 00:14:40.640 }, 00:14:40.640 { 00:14:40.640 "name": "BaseBdev2", 00:14:40.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.640 "is_configured": false, 00:14:40.640 "data_offset": 0, 00:14:40.640 "data_size": 0 00:14:40.640 }, 00:14:40.640 { 00:14:40.640 "name": "BaseBdev3", 00:14:40.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.640 "is_configured": false, 00:14:40.640 "data_offset": 0, 00:14:40.640 "data_size": 0 00:14:40.640 }, 00:14:40.640 { 00:14:40.640 "name": "BaseBdev4", 00:14:40.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.640 "is_configured": false, 00:14:40.640 "data_offset": 0, 00:14:40.640 "data_size": 0 00:14:40.640 } 00:14:40.640 ] 00:14:40.640 }' 00:14:40.640 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.640 18:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.205 [2024-12-06 18:13:06.518532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:41.205 [2024-12-06 18:13:06.518791] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.205 [2024-12-06 18:13:06.530538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.205 [2024-12-06 18:13:06.533143] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:41.205 [2024-12-06 18:13:06.533427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.205 [2024-12-06 18:13:06.533459] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:41.205 [2024-12-06 18:13:06.533484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:41.205 [2024-12-06 18:13:06.533498] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:41.205 [2024-12-06 18:13:06.533515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:41.205 18:13:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.205 "name": "Existed_Raid", 00:14:41.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.205 "strip_size_kb": 0, 00:14:41.205 "state": "configuring", 00:14:41.205 "raid_level": "raid1", 00:14:41.205 "superblock": false, 00:14:41.205 "num_base_bdevs": 4, 00:14:41.205 "num_base_bdevs_discovered": 1, 00:14:41.205 
"num_base_bdevs_operational": 4, 00:14:41.205 "base_bdevs_list": [ 00:14:41.205 { 00:14:41.205 "name": "BaseBdev1", 00:14:41.205 "uuid": "1d75b3f4-8f72-485d-8607-314a533e201e", 00:14:41.205 "is_configured": true, 00:14:41.205 "data_offset": 0, 00:14:41.205 "data_size": 65536 00:14:41.205 }, 00:14:41.205 { 00:14:41.205 "name": "BaseBdev2", 00:14:41.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.205 "is_configured": false, 00:14:41.205 "data_offset": 0, 00:14:41.205 "data_size": 0 00:14:41.205 }, 00:14:41.205 { 00:14:41.205 "name": "BaseBdev3", 00:14:41.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.205 "is_configured": false, 00:14:41.205 "data_offset": 0, 00:14:41.205 "data_size": 0 00:14:41.205 }, 00:14:41.205 { 00:14:41.205 "name": "BaseBdev4", 00:14:41.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.205 "is_configured": false, 00:14:41.205 "data_offset": 0, 00:14:41.205 "data_size": 0 00:14:41.205 } 00:14:41.205 ] 00:14:41.205 }' 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.205 18:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.773 [2024-12-06 18:13:07.088231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:41.773 BaseBdev2 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.773 [ 00:14:41.773 { 00:14:41.773 "name": "BaseBdev2", 00:14:41.773 "aliases": [ 00:14:41.773 "96108136-cea6-4dbf-9cfc-82b1ddaa91fe" 00:14:41.773 ], 00:14:41.773 "product_name": "Malloc disk", 00:14:41.773 "block_size": 512, 00:14:41.773 "num_blocks": 65536, 00:14:41.773 "uuid": "96108136-cea6-4dbf-9cfc-82b1ddaa91fe", 00:14:41.773 "assigned_rate_limits": { 00:14:41.773 "rw_ios_per_sec": 0, 00:14:41.773 "rw_mbytes_per_sec": 0, 00:14:41.773 "r_mbytes_per_sec": 0, 00:14:41.773 "w_mbytes_per_sec": 0 00:14:41.773 }, 00:14:41.773 "claimed": true, 00:14:41.773 "claim_type": "exclusive_write", 00:14:41.773 "zoned": false, 00:14:41.773 "supported_io_types": { 00:14:41.773 "read": true, 00:14:41.773 "write": true, 00:14:41.773 
"unmap": true, 00:14:41.773 "flush": true, 00:14:41.773 "reset": true, 00:14:41.773 "nvme_admin": false, 00:14:41.773 "nvme_io": false, 00:14:41.773 "nvme_io_md": false, 00:14:41.773 "write_zeroes": true, 00:14:41.773 "zcopy": true, 00:14:41.773 "get_zone_info": false, 00:14:41.773 "zone_management": false, 00:14:41.773 "zone_append": false, 00:14:41.773 "compare": false, 00:14:41.773 "compare_and_write": false, 00:14:41.773 "abort": true, 00:14:41.773 "seek_hole": false, 00:14:41.773 "seek_data": false, 00:14:41.773 "copy": true, 00:14:41.773 "nvme_iov_md": false 00:14:41.773 }, 00:14:41.773 "memory_domains": [ 00:14:41.773 { 00:14:41.773 "dma_device_id": "system", 00:14:41.773 "dma_device_type": 1 00:14:41.773 }, 00:14:41.773 { 00:14:41.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.773 "dma_device_type": 2 00:14:41.773 } 00:14:41.773 ], 00:14:41.773 "driver_specific": {} 00:14:41.773 } 00:14:41.773 ] 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.773 18:13:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.773 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.773 "name": "Existed_Raid", 00:14:41.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.773 "strip_size_kb": 0, 00:14:41.773 "state": "configuring", 00:14:41.773 "raid_level": "raid1", 00:14:41.773 "superblock": false, 00:14:41.773 "num_base_bdevs": 4, 00:14:41.774 "num_base_bdevs_discovered": 2, 00:14:41.774 "num_base_bdevs_operational": 4, 00:14:41.774 "base_bdevs_list": [ 00:14:41.774 { 00:14:41.774 "name": "BaseBdev1", 00:14:41.774 "uuid": "1d75b3f4-8f72-485d-8607-314a533e201e", 00:14:41.774 "is_configured": true, 00:14:41.774 "data_offset": 0, 00:14:41.774 "data_size": 65536 00:14:41.774 }, 00:14:41.774 { 00:14:41.774 "name": "BaseBdev2", 00:14:41.774 "uuid": "96108136-cea6-4dbf-9cfc-82b1ddaa91fe", 00:14:41.774 "is_configured": true, 00:14:41.774 
"data_offset": 0, 00:14:41.774 "data_size": 65536 00:14:41.774 }, 00:14:41.774 { 00:14:41.774 "name": "BaseBdev3", 00:14:41.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.774 "is_configured": false, 00:14:41.774 "data_offset": 0, 00:14:41.774 "data_size": 0 00:14:41.774 }, 00:14:41.774 { 00:14:41.774 "name": "BaseBdev4", 00:14:41.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.774 "is_configured": false, 00:14:41.774 "data_offset": 0, 00:14:41.774 "data_size": 0 00:14:41.774 } 00:14:41.774 ] 00:14:41.774 }' 00:14:41.774 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.774 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.032 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:42.032 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.032 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.291 [2024-12-06 18:13:07.601946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:42.291 BaseBdev3 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.291 [ 00:14:42.291 { 00:14:42.291 "name": "BaseBdev3", 00:14:42.291 "aliases": [ 00:14:42.291 "eaab69df-07b5-429a-9350-95f3544625aa" 00:14:42.291 ], 00:14:42.291 "product_name": "Malloc disk", 00:14:42.291 "block_size": 512, 00:14:42.291 "num_blocks": 65536, 00:14:42.291 "uuid": "eaab69df-07b5-429a-9350-95f3544625aa", 00:14:42.291 "assigned_rate_limits": { 00:14:42.291 "rw_ios_per_sec": 0, 00:14:42.291 "rw_mbytes_per_sec": 0, 00:14:42.291 "r_mbytes_per_sec": 0, 00:14:42.291 "w_mbytes_per_sec": 0 00:14:42.291 }, 00:14:42.291 "claimed": true, 00:14:42.291 "claim_type": "exclusive_write", 00:14:42.291 "zoned": false, 00:14:42.291 "supported_io_types": { 00:14:42.291 "read": true, 00:14:42.291 "write": true, 00:14:42.291 "unmap": true, 00:14:42.291 "flush": true, 00:14:42.291 "reset": true, 00:14:42.291 "nvme_admin": false, 00:14:42.291 "nvme_io": false, 00:14:42.291 "nvme_io_md": false, 00:14:42.291 "write_zeroes": true, 00:14:42.291 "zcopy": true, 00:14:42.291 "get_zone_info": false, 00:14:42.291 "zone_management": false, 00:14:42.291 "zone_append": false, 00:14:42.291 "compare": false, 00:14:42.291 "compare_and_write": false, 00:14:42.291 "abort": true, 
00:14:42.291 "seek_hole": false, 00:14:42.291 "seek_data": false, 00:14:42.291 "copy": true, 00:14:42.291 "nvme_iov_md": false 00:14:42.291 }, 00:14:42.291 "memory_domains": [ 00:14:42.291 { 00:14:42.291 "dma_device_id": "system", 00:14:42.291 "dma_device_type": 1 00:14:42.291 }, 00:14:42.291 { 00:14:42.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.291 "dma_device_type": 2 00:14:42.291 } 00:14:42.291 ], 00:14:42.291 "driver_specific": {} 00:14:42.291 } 00:14:42.291 ] 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.291 18:13:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.291 "name": "Existed_Raid", 00:14:42.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.291 "strip_size_kb": 0, 00:14:42.291 "state": "configuring", 00:14:42.291 "raid_level": "raid1", 00:14:42.291 "superblock": false, 00:14:42.291 "num_base_bdevs": 4, 00:14:42.291 "num_base_bdevs_discovered": 3, 00:14:42.291 "num_base_bdevs_operational": 4, 00:14:42.291 "base_bdevs_list": [ 00:14:42.291 { 00:14:42.291 "name": "BaseBdev1", 00:14:42.291 "uuid": "1d75b3f4-8f72-485d-8607-314a533e201e", 00:14:42.291 "is_configured": true, 00:14:42.291 "data_offset": 0, 00:14:42.291 "data_size": 65536 00:14:42.291 }, 00:14:42.291 { 00:14:42.291 "name": "BaseBdev2", 00:14:42.291 "uuid": "96108136-cea6-4dbf-9cfc-82b1ddaa91fe", 00:14:42.291 "is_configured": true, 00:14:42.291 "data_offset": 0, 00:14:42.291 "data_size": 65536 00:14:42.291 }, 00:14:42.291 { 00:14:42.291 "name": "BaseBdev3", 00:14:42.291 "uuid": "eaab69df-07b5-429a-9350-95f3544625aa", 00:14:42.291 "is_configured": true, 00:14:42.291 "data_offset": 0, 00:14:42.291 "data_size": 65536 00:14:42.291 }, 00:14:42.291 { 00:14:42.291 "name": "BaseBdev4", 00:14:42.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.291 "is_configured": false, 00:14:42.291 "data_offset": 
0, 00:14:42.291 "data_size": 0 00:14:42.291 } 00:14:42.291 ] 00:14:42.291 }' 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.291 18:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.858 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:42.858 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.858 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.858 [2024-12-06 18:13:08.168316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:42.858 [2024-12-06 18:13:08.168404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:42.858 [2024-12-06 18:13:08.168418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:42.858 [2024-12-06 18:13:08.168741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:42.858 [2024-12-06 18:13:08.169052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:42.858 [2024-12-06 18:13:08.169078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:42.858 [2024-12-06 18:13:08.169444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.858 BaseBdev4 00:14:42.858 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.858 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:42.858 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:42.858 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.859 [ 00:14:42.859 { 00:14:42.859 "name": "BaseBdev4", 00:14:42.859 "aliases": [ 00:14:42.859 "0e9d97cd-711d-40ba-a738-cfa119a30af3" 00:14:42.859 ], 00:14:42.859 "product_name": "Malloc disk", 00:14:42.859 "block_size": 512, 00:14:42.859 "num_blocks": 65536, 00:14:42.859 "uuid": "0e9d97cd-711d-40ba-a738-cfa119a30af3", 00:14:42.859 "assigned_rate_limits": { 00:14:42.859 "rw_ios_per_sec": 0, 00:14:42.859 "rw_mbytes_per_sec": 0, 00:14:42.859 "r_mbytes_per_sec": 0, 00:14:42.859 "w_mbytes_per_sec": 0 00:14:42.859 }, 00:14:42.859 "claimed": true, 00:14:42.859 "claim_type": "exclusive_write", 00:14:42.859 "zoned": false, 00:14:42.859 "supported_io_types": { 00:14:42.859 "read": true, 00:14:42.859 "write": true, 00:14:42.859 "unmap": true, 00:14:42.859 "flush": true, 00:14:42.859 "reset": true, 00:14:42.859 "nvme_admin": false, 00:14:42.859 "nvme_io": 
false, 00:14:42.859 "nvme_io_md": false, 00:14:42.859 "write_zeroes": true, 00:14:42.859 "zcopy": true, 00:14:42.859 "get_zone_info": false, 00:14:42.859 "zone_management": false, 00:14:42.859 "zone_append": false, 00:14:42.859 "compare": false, 00:14:42.859 "compare_and_write": false, 00:14:42.859 "abort": true, 00:14:42.859 "seek_hole": false, 00:14:42.859 "seek_data": false, 00:14:42.859 "copy": true, 00:14:42.859 "nvme_iov_md": false 00:14:42.859 }, 00:14:42.859 "memory_domains": [ 00:14:42.859 { 00:14:42.859 "dma_device_id": "system", 00:14:42.859 "dma_device_type": 1 00:14:42.859 }, 00:14:42.859 { 00:14:42.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.859 "dma_device_type": 2 00:14:42.859 } 00:14:42.859 ], 00:14:42.859 "driver_specific": {} 00:14:42.859 } 00:14:42.859 ] 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.859 "name": "Existed_Raid", 00:14:42.859 "uuid": "f392a6a4-2324-4bd5-956f-5cc4f69a450d", 00:14:42.859 "strip_size_kb": 0, 00:14:42.859 "state": "online", 00:14:42.859 "raid_level": "raid1", 00:14:42.859 "superblock": false, 00:14:42.859 "num_base_bdevs": 4, 00:14:42.859 "num_base_bdevs_discovered": 4, 00:14:42.859 "num_base_bdevs_operational": 4, 00:14:42.859 "base_bdevs_list": [ 00:14:42.859 { 00:14:42.859 "name": "BaseBdev1", 00:14:42.859 "uuid": "1d75b3f4-8f72-485d-8607-314a533e201e", 00:14:42.859 "is_configured": true, 00:14:42.859 "data_offset": 0, 00:14:42.859 "data_size": 65536 00:14:42.859 }, 00:14:42.859 { 00:14:42.859 "name": "BaseBdev2", 00:14:42.859 "uuid": "96108136-cea6-4dbf-9cfc-82b1ddaa91fe", 00:14:42.859 "is_configured": true, 00:14:42.859 "data_offset": 0, 00:14:42.859 "data_size": 65536 00:14:42.859 }, 00:14:42.859 { 00:14:42.859 "name": "BaseBdev3", 00:14:42.859 "uuid": "eaab69df-07b5-429a-9350-95f3544625aa", 
00:14:42.859 "is_configured": true, 00:14:42.859 "data_offset": 0, 00:14:42.859 "data_size": 65536 00:14:42.859 }, 00:14:42.859 { 00:14:42.859 "name": "BaseBdev4", 00:14:42.859 "uuid": "0e9d97cd-711d-40ba-a738-cfa119a30af3", 00:14:42.859 "is_configured": true, 00:14:42.859 "data_offset": 0, 00:14:42.859 "data_size": 65536 00:14:42.859 } 00:14:42.859 ] 00:14:42.859 }' 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.859 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.477 [2024-12-06 18:13:08.729026] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:43.477 "name": "Existed_Raid", 00:14:43.477 "aliases": [ 00:14:43.477 "f392a6a4-2324-4bd5-956f-5cc4f69a450d" 00:14:43.477 ], 00:14:43.477 "product_name": "Raid Volume", 00:14:43.477 "block_size": 512, 00:14:43.477 "num_blocks": 65536, 00:14:43.477 "uuid": "f392a6a4-2324-4bd5-956f-5cc4f69a450d", 00:14:43.477 "assigned_rate_limits": { 00:14:43.477 "rw_ios_per_sec": 0, 00:14:43.477 "rw_mbytes_per_sec": 0, 00:14:43.477 "r_mbytes_per_sec": 0, 00:14:43.477 "w_mbytes_per_sec": 0 00:14:43.477 }, 00:14:43.477 "claimed": false, 00:14:43.477 "zoned": false, 00:14:43.477 "supported_io_types": { 00:14:43.477 "read": true, 00:14:43.477 "write": true, 00:14:43.477 "unmap": false, 00:14:43.477 "flush": false, 00:14:43.477 "reset": true, 00:14:43.477 "nvme_admin": false, 00:14:43.477 "nvme_io": false, 00:14:43.477 "nvme_io_md": false, 00:14:43.477 "write_zeroes": true, 00:14:43.477 "zcopy": false, 00:14:43.477 "get_zone_info": false, 00:14:43.477 "zone_management": false, 00:14:43.477 "zone_append": false, 00:14:43.477 "compare": false, 00:14:43.477 "compare_and_write": false, 00:14:43.477 "abort": false, 00:14:43.477 "seek_hole": false, 00:14:43.477 "seek_data": false, 00:14:43.477 "copy": false, 00:14:43.477 "nvme_iov_md": false 00:14:43.477 }, 00:14:43.477 "memory_domains": [ 00:14:43.477 { 00:14:43.477 "dma_device_id": "system", 00:14:43.477 "dma_device_type": 1 00:14:43.477 }, 00:14:43.477 { 00:14:43.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.477 "dma_device_type": 2 00:14:43.477 }, 00:14:43.477 { 00:14:43.477 "dma_device_id": "system", 00:14:43.477 "dma_device_type": 1 00:14:43.477 }, 00:14:43.477 { 00:14:43.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.477 "dma_device_type": 2 00:14:43.477 }, 00:14:43.477 { 00:14:43.477 "dma_device_id": "system", 00:14:43.477 "dma_device_type": 1 00:14:43.477 }, 00:14:43.477 { 00:14:43.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.477 "dma_device_type": 2 
00:14:43.477 }, 00:14:43.477 { 00:14:43.477 "dma_device_id": "system", 00:14:43.477 "dma_device_type": 1 00:14:43.477 }, 00:14:43.477 { 00:14:43.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.477 "dma_device_type": 2 00:14:43.477 } 00:14:43.477 ], 00:14:43.477 "driver_specific": { 00:14:43.477 "raid": { 00:14:43.477 "uuid": "f392a6a4-2324-4bd5-956f-5cc4f69a450d", 00:14:43.477 "strip_size_kb": 0, 00:14:43.477 "state": "online", 00:14:43.477 "raid_level": "raid1", 00:14:43.477 "superblock": false, 00:14:43.477 "num_base_bdevs": 4, 00:14:43.477 "num_base_bdevs_discovered": 4, 00:14:43.477 "num_base_bdevs_operational": 4, 00:14:43.477 "base_bdevs_list": [ 00:14:43.477 { 00:14:43.477 "name": "BaseBdev1", 00:14:43.477 "uuid": "1d75b3f4-8f72-485d-8607-314a533e201e", 00:14:43.477 "is_configured": true, 00:14:43.477 "data_offset": 0, 00:14:43.477 "data_size": 65536 00:14:43.477 }, 00:14:43.477 { 00:14:43.477 "name": "BaseBdev2", 00:14:43.477 "uuid": "96108136-cea6-4dbf-9cfc-82b1ddaa91fe", 00:14:43.477 "is_configured": true, 00:14:43.477 "data_offset": 0, 00:14:43.477 "data_size": 65536 00:14:43.477 }, 00:14:43.477 { 00:14:43.477 "name": "BaseBdev3", 00:14:43.477 "uuid": "eaab69df-07b5-429a-9350-95f3544625aa", 00:14:43.477 "is_configured": true, 00:14:43.477 "data_offset": 0, 00:14:43.477 "data_size": 65536 00:14:43.477 }, 00:14:43.477 { 00:14:43.477 "name": "BaseBdev4", 00:14:43.477 "uuid": "0e9d97cd-711d-40ba-a738-cfa119a30af3", 00:14:43.477 "is_configured": true, 00:14:43.477 "data_offset": 0, 00:14:43.477 "data_size": 65536 00:14:43.477 } 00:14:43.477 ] 00:14:43.477 } 00:14:43.477 } 00:14:43.477 }' 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:43.477 BaseBdev2 00:14:43.477 BaseBdev3 00:14:43.477 BaseBdev4' 00:14:43.477 
18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.477 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.478 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:43.478 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.478 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.478 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.478 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.478 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:14:43.478 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.478 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.478 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:43.478 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.478 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.478 18:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.737 18:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.737 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.737 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.738 [2024-12-06 18:13:09.104762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.738 "name": "Existed_Raid", 00:14:43.738 "uuid": "f392a6a4-2324-4bd5-956f-5cc4f69a450d", 00:14:43.738 "strip_size_kb": 0, 00:14:43.738 "state": "online", 00:14:43.738 "raid_level": "raid1", 00:14:43.738 "superblock": false, 00:14:43.738 "num_base_bdevs": 4, 00:14:43.738 "num_base_bdevs_discovered": 3, 00:14:43.738 "num_base_bdevs_operational": 3, 00:14:43.738 "base_bdevs_list": [ 00:14:43.738 { 00:14:43.738 "name": null, 00:14:43.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.738 "is_configured": false, 00:14:43.738 "data_offset": 0, 00:14:43.738 "data_size": 65536 00:14:43.738 }, 00:14:43.738 { 00:14:43.738 "name": "BaseBdev2", 00:14:43.738 "uuid": "96108136-cea6-4dbf-9cfc-82b1ddaa91fe", 00:14:43.738 "is_configured": true, 00:14:43.738 "data_offset": 0, 00:14:43.738 "data_size": 65536 00:14:43.738 }, 00:14:43.738 { 00:14:43.738 "name": "BaseBdev3", 00:14:43.738 "uuid": "eaab69df-07b5-429a-9350-95f3544625aa", 00:14:43.738 "is_configured": true, 00:14:43.738 "data_offset": 0, 00:14:43.738 "data_size": 65536 00:14:43.738 }, 00:14:43.738 { 
00:14:43.738 "name": "BaseBdev4", 00:14:43.738 "uuid": "0e9d97cd-711d-40ba-a738-cfa119a30af3", 00:14:43.738 "is_configured": true, 00:14:43.738 "data_offset": 0, 00:14:43.738 "data_size": 65536 00:14:43.738 } 00:14:43.738 ] 00:14:43.738 }' 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.738 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.304 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:44.305 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.305 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.305 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:44.305 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.305 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.305 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.305 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:44.305 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:44.305 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:44.305 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.305 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.305 [2024-12-06 18:13:09.748238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.564 
18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.564 [2024-12-06 18:13:09.893759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.564 18:13:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:44.564 18:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.564 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:44.564 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:44.564 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:44.564 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.564 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.564 [2024-12-06 18:13:10.040277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:44.564 [2024-12-06 18:13:10.040393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.824 [2024-12-06 18:13:10.127864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.824 [2024-12-06 18:13:10.127920] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.824 [2024-12-06 18:13:10.127941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.824 18:13:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.824 BaseBdev2 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.824 18:13:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.824 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.824 [ 00:14:44.824 { 00:14:44.824 "name": "BaseBdev2", 00:14:44.824 "aliases": [ 00:14:44.824 "19561c1c-7634-4518-af16-94ccb338f994" 00:14:44.824 ], 00:14:44.824 "product_name": "Malloc disk", 00:14:44.824 "block_size": 512, 00:14:44.824 "num_blocks": 65536, 00:14:44.824 "uuid": "19561c1c-7634-4518-af16-94ccb338f994", 00:14:44.824 "assigned_rate_limits": { 00:14:44.824 "rw_ios_per_sec": 0, 00:14:44.824 "rw_mbytes_per_sec": 0, 00:14:44.824 "r_mbytes_per_sec": 0, 00:14:44.824 "w_mbytes_per_sec": 0 00:14:44.824 }, 00:14:44.824 "claimed": false, 00:14:44.824 "zoned": false, 00:14:44.824 "supported_io_types": { 00:14:44.824 "read": true, 00:14:44.824 "write": true, 00:14:44.824 "unmap": true, 00:14:44.824 "flush": true, 00:14:44.824 "reset": true, 00:14:44.824 "nvme_admin": false, 00:14:44.824 "nvme_io": false, 00:14:44.824 "nvme_io_md": false, 00:14:44.824 "write_zeroes": true, 00:14:44.824 "zcopy": true, 00:14:44.824 "get_zone_info": false, 00:14:44.824 "zone_management": false, 00:14:44.824 "zone_append": false, 00:14:44.824 "compare": false, 00:14:44.824 "compare_and_write": false, 
00:14:44.824 "abort": true, 00:14:44.824 "seek_hole": false, 00:14:44.824 "seek_data": false, 00:14:44.824 "copy": true, 00:14:44.824 "nvme_iov_md": false 00:14:44.824 }, 00:14:44.824 "memory_domains": [ 00:14:44.824 { 00:14:44.824 "dma_device_id": "system", 00:14:44.824 "dma_device_type": 1 00:14:44.824 }, 00:14:44.825 { 00:14:44.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.825 "dma_device_type": 2 00:14:44.825 } 00:14:44.825 ], 00:14:44.825 "driver_specific": {} 00:14:44.825 } 00:14:44.825 ] 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.825 BaseBdev3 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.825 18:13:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.825 [ 00:14:44.825 { 00:14:44.825 "name": "BaseBdev3", 00:14:44.825 "aliases": [ 00:14:44.825 "17debe14-c790-4452-9b9e-c9bb97d952b5" 00:14:44.825 ], 00:14:44.825 "product_name": "Malloc disk", 00:14:44.825 "block_size": 512, 00:14:44.825 "num_blocks": 65536, 00:14:44.825 "uuid": "17debe14-c790-4452-9b9e-c9bb97d952b5", 00:14:44.825 "assigned_rate_limits": { 00:14:44.825 "rw_ios_per_sec": 0, 00:14:44.825 "rw_mbytes_per_sec": 0, 00:14:44.825 "r_mbytes_per_sec": 0, 00:14:44.825 "w_mbytes_per_sec": 0 00:14:44.825 }, 00:14:44.825 "claimed": false, 00:14:44.825 "zoned": false, 00:14:44.825 "supported_io_types": { 00:14:44.825 "read": true, 00:14:44.825 "write": true, 00:14:44.825 "unmap": true, 00:14:44.825 "flush": true, 00:14:44.825 "reset": true, 00:14:44.825 "nvme_admin": false, 00:14:44.825 "nvme_io": false, 00:14:44.825 "nvme_io_md": false, 00:14:44.825 "write_zeroes": true, 00:14:44.825 "zcopy": true, 00:14:44.825 "get_zone_info": false, 00:14:44.825 "zone_management": false, 00:14:44.825 "zone_append": false, 00:14:44.825 "compare": false, 00:14:44.825 "compare_and_write": false, 
00:14:44.825 "abort": true, 00:14:44.825 "seek_hole": false, 00:14:44.825 "seek_data": false, 00:14:44.825 "copy": true, 00:14:44.825 "nvme_iov_md": false 00:14:44.825 }, 00:14:44.825 "memory_domains": [ 00:14:44.825 { 00:14:44.825 "dma_device_id": "system", 00:14:44.825 "dma_device_type": 1 00:14:44.825 }, 00:14:44.825 { 00:14:44.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.825 "dma_device_type": 2 00:14:44.825 } 00:14:44.825 ], 00:14:44.825 "driver_specific": {} 00:14:44.825 } 00:14:44.825 ] 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.825 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.085 BaseBdev4 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:45.085 18:13:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.085 [ 00:14:45.085 { 00:14:45.085 "name": "BaseBdev4", 00:14:45.085 "aliases": [ 00:14:45.085 "4573aaba-90c1-4c08-9fa0-a0980652b1cf" 00:14:45.085 ], 00:14:45.085 "product_name": "Malloc disk", 00:14:45.085 "block_size": 512, 00:14:45.085 "num_blocks": 65536, 00:14:45.085 "uuid": "4573aaba-90c1-4c08-9fa0-a0980652b1cf", 00:14:45.085 "assigned_rate_limits": { 00:14:45.085 "rw_ios_per_sec": 0, 00:14:45.085 "rw_mbytes_per_sec": 0, 00:14:45.085 "r_mbytes_per_sec": 0, 00:14:45.085 "w_mbytes_per_sec": 0 00:14:45.085 }, 00:14:45.085 "claimed": false, 00:14:45.085 "zoned": false, 00:14:45.085 "supported_io_types": { 00:14:45.085 "read": true, 00:14:45.085 "write": true, 00:14:45.085 "unmap": true, 00:14:45.085 "flush": true, 00:14:45.085 "reset": true, 00:14:45.085 "nvme_admin": false, 00:14:45.085 "nvme_io": false, 00:14:45.085 "nvme_io_md": false, 00:14:45.085 "write_zeroes": true, 00:14:45.085 "zcopy": true, 00:14:45.085 "get_zone_info": false, 00:14:45.085 "zone_management": false, 00:14:45.085 "zone_append": false, 00:14:45.085 "compare": false, 00:14:45.085 "compare_and_write": false, 
00:14:45.085 "abort": true, 00:14:45.085 "seek_hole": false, 00:14:45.085 "seek_data": false, 00:14:45.085 "copy": true, 00:14:45.085 "nvme_iov_md": false 00:14:45.085 }, 00:14:45.085 "memory_domains": [ 00:14:45.085 { 00:14:45.085 "dma_device_id": "system", 00:14:45.085 "dma_device_type": 1 00:14:45.085 }, 00:14:45.085 { 00:14:45.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.085 "dma_device_type": 2 00:14:45.085 } 00:14:45.085 ], 00:14:45.085 "driver_specific": {} 00:14:45.085 } 00:14:45.085 ] 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.085 [2024-12-06 18:13:10.407985] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.085 [2024-12-06 18:13:10.408183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.085 [2024-12-06 18:13:10.408351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.085 [2024-12-06 18:13:10.410832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.085 [2024-12-06 18:13:10.411027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:45.085 18:13:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.085 "name": "Existed_Raid", 00:14:45.085 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:45.085 "strip_size_kb": 0, 00:14:45.085 "state": "configuring", 00:14:45.085 "raid_level": "raid1", 00:14:45.085 "superblock": false, 00:14:45.085 "num_base_bdevs": 4, 00:14:45.085 "num_base_bdevs_discovered": 3, 00:14:45.085 "num_base_bdevs_operational": 4, 00:14:45.085 "base_bdevs_list": [ 00:14:45.085 { 00:14:45.085 "name": "BaseBdev1", 00:14:45.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.085 "is_configured": false, 00:14:45.085 "data_offset": 0, 00:14:45.085 "data_size": 0 00:14:45.085 }, 00:14:45.085 { 00:14:45.085 "name": "BaseBdev2", 00:14:45.085 "uuid": "19561c1c-7634-4518-af16-94ccb338f994", 00:14:45.085 "is_configured": true, 00:14:45.085 "data_offset": 0, 00:14:45.085 "data_size": 65536 00:14:45.085 }, 00:14:45.085 { 00:14:45.085 "name": "BaseBdev3", 00:14:45.085 "uuid": "17debe14-c790-4452-9b9e-c9bb97d952b5", 00:14:45.085 "is_configured": true, 00:14:45.085 "data_offset": 0, 00:14:45.085 "data_size": 65536 00:14:45.085 }, 00:14:45.085 { 00:14:45.085 "name": "BaseBdev4", 00:14:45.085 "uuid": "4573aaba-90c1-4c08-9fa0-a0980652b1cf", 00:14:45.085 "is_configured": true, 00:14:45.085 "data_offset": 0, 00:14:45.085 "data_size": 65536 00:14:45.085 } 00:14:45.085 ] 00:14:45.085 }' 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.085 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.654 [2024-12-06 18:13:10.964184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.654 18:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.654 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.654 "name": "Existed_Raid", 00:14:45.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.654 
"strip_size_kb": 0, 00:14:45.654 "state": "configuring", 00:14:45.654 "raid_level": "raid1", 00:14:45.654 "superblock": false, 00:14:45.654 "num_base_bdevs": 4, 00:14:45.654 "num_base_bdevs_discovered": 2, 00:14:45.654 "num_base_bdevs_operational": 4, 00:14:45.654 "base_bdevs_list": [ 00:14:45.654 { 00:14:45.654 "name": "BaseBdev1", 00:14:45.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.654 "is_configured": false, 00:14:45.654 "data_offset": 0, 00:14:45.654 "data_size": 0 00:14:45.654 }, 00:14:45.654 { 00:14:45.654 "name": null, 00:14:45.654 "uuid": "19561c1c-7634-4518-af16-94ccb338f994", 00:14:45.654 "is_configured": false, 00:14:45.654 "data_offset": 0, 00:14:45.654 "data_size": 65536 00:14:45.654 }, 00:14:45.654 { 00:14:45.654 "name": "BaseBdev3", 00:14:45.654 "uuid": "17debe14-c790-4452-9b9e-c9bb97d952b5", 00:14:45.654 "is_configured": true, 00:14:45.654 "data_offset": 0, 00:14:45.654 "data_size": 65536 00:14:45.654 }, 00:14:45.654 { 00:14:45.654 "name": "BaseBdev4", 00:14:45.654 "uuid": "4573aaba-90c1-4c08-9fa0-a0980652b1cf", 00:14:45.654 "is_configured": true, 00:14:45.654 "data_offset": 0, 00:14:45.654 "data_size": 65536 00:14:45.654 } 00:14:45.654 ] 00:14:45.654 }' 00:14:45.654 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.654 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.223 18:13:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.223 [2024-12-06 18:13:11.558722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.223 BaseBdev1 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.223 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.223 [ 00:14:46.223 { 00:14:46.223 "name": "BaseBdev1", 00:14:46.223 "aliases": [ 00:14:46.223 "4f50d3c2-d562-4e93-be05-9a51a5c5d167" 00:14:46.223 ], 00:14:46.223 "product_name": "Malloc disk", 00:14:46.223 "block_size": 512, 00:14:46.223 "num_blocks": 65536, 00:14:46.223 "uuid": "4f50d3c2-d562-4e93-be05-9a51a5c5d167", 00:14:46.223 "assigned_rate_limits": { 00:14:46.223 "rw_ios_per_sec": 0, 00:14:46.223 "rw_mbytes_per_sec": 0, 00:14:46.223 "r_mbytes_per_sec": 0, 00:14:46.223 "w_mbytes_per_sec": 0 00:14:46.223 }, 00:14:46.223 "claimed": true, 00:14:46.223 "claim_type": "exclusive_write", 00:14:46.223 "zoned": false, 00:14:46.223 "supported_io_types": { 00:14:46.223 "read": true, 00:14:46.223 "write": true, 00:14:46.223 "unmap": true, 00:14:46.223 "flush": true, 00:14:46.223 "reset": true, 00:14:46.223 "nvme_admin": false, 00:14:46.223 "nvme_io": false, 00:14:46.223 "nvme_io_md": false, 00:14:46.223 "write_zeroes": true, 00:14:46.223 "zcopy": true, 00:14:46.223 "get_zone_info": false, 00:14:46.223 "zone_management": false, 00:14:46.223 "zone_append": false, 00:14:46.223 "compare": false, 00:14:46.223 "compare_and_write": false, 00:14:46.223 "abort": true, 00:14:46.223 "seek_hole": false, 00:14:46.223 "seek_data": false, 00:14:46.223 "copy": true, 00:14:46.223 "nvme_iov_md": false 00:14:46.223 }, 00:14:46.223 "memory_domains": [ 00:14:46.223 { 00:14:46.223 "dma_device_id": "system", 00:14:46.223 "dma_device_type": 1 00:14:46.223 }, 00:14:46.223 { 00:14:46.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.223 "dma_device_type": 2 00:14:46.223 } 00:14:46.223 ], 00:14:46.223 "driver_specific": {} 00:14:46.223 } 00:14:46.223 ] 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.224 "name": "Existed_Raid", 00:14:46.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.224 
"strip_size_kb": 0, 00:14:46.224 "state": "configuring", 00:14:46.224 "raid_level": "raid1", 00:14:46.224 "superblock": false, 00:14:46.224 "num_base_bdevs": 4, 00:14:46.224 "num_base_bdevs_discovered": 3, 00:14:46.224 "num_base_bdevs_operational": 4, 00:14:46.224 "base_bdevs_list": [ 00:14:46.224 { 00:14:46.224 "name": "BaseBdev1", 00:14:46.224 "uuid": "4f50d3c2-d562-4e93-be05-9a51a5c5d167", 00:14:46.224 "is_configured": true, 00:14:46.224 "data_offset": 0, 00:14:46.224 "data_size": 65536 00:14:46.224 }, 00:14:46.224 { 00:14:46.224 "name": null, 00:14:46.224 "uuid": "19561c1c-7634-4518-af16-94ccb338f994", 00:14:46.224 "is_configured": false, 00:14:46.224 "data_offset": 0, 00:14:46.224 "data_size": 65536 00:14:46.224 }, 00:14:46.224 { 00:14:46.224 "name": "BaseBdev3", 00:14:46.224 "uuid": "17debe14-c790-4452-9b9e-c9bb97d952b5", 00:14:46.224 "is_configured": true, 00:14:46.224 "data_offset": 0, 00:14:46.224 "data_size": 65536 00:14:46.224 }, 00:14:46.224 { 00:14:46.224 "name": "BaseBdev4", 00:14:46.224 "uuid": "4573aaba-90c1-4c08-9fa0-a0980652b1cf", 00:14:46.224 "is_configured": true, 00:14:46.224 "data_offset": 0, 00:14:46.224 "data_size": 65536 00:14:46.224 } 00:14:46.224 ] 00:14:46.224 }' 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.224 18:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.791 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:46.791 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.791 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.791 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.791 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.791 
18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:46.791 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:46.791 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.791 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.791 [2024-12-06 18:13:12.150928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:46.791 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.791 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:46.791 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.791 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.791 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.791 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.792 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.792 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.792 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.792 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.792 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.792 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.792 18:13:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.792 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.792 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.792 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.792 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.792 "name": "Existed_Raid", 00:14:46.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.792 "strip_size_kb": 0, 00:14:46.792 "state": "configuring", 00:14:46.792 "raid_level": "raid1", 00:14:46.792 "superblock": false, 00:14:46.792 "num_base_bdevs": 4, 00:14:46.792 "num_base_bdevs_discovered": 2, 00:14:46.792 "num_base_bdevs_operational": 4, 00:14:46.792 "base_bdevs_list": [ 00:14:46.792 { 00:14:46.792 "name": "BaseBdev1", 00:14:46.792 "uuid": "4f50d3c2-d562-4e93-be05-9a51a5c5d167", 00:14:46.792 "is_configured": true, 00:14:46.792 "data_offset": 0, 00:14:46.792 "data_size": 65536 00:14:46.792 }, 00:14:46.792 { 00:14:46.792 "name": null, 00:14:46.792 "uuid": "19561c1c-7634-4518-af16-94ccb338f994", 00:14:46.792 "is_configured": false, 00:14:46.792 "data_offset": 0, 00:14:46.792 "data_size": 65536 00:14:46.792 }, 00:14:46.792 { 00:14:46.792 "name": null, 00:14:46.792 "uuid": "17debe14-c790-4452-9b9e-c9bb97d952b5", 00:14:46.792 "is_configured": false, 00:14:46.792 "data_offset": 0, 00:14:46.792 "data_size": 65536 00:14:46.792 }, 00:14:46.792 { 00:14:46.792 "name": "BaseBdev4", 00:14:46.792 "uuid": "4573aaba-90c1-4c08-9fa0-a0980652b1cf", 00:14:46.792 "is_configured": true, 00:14:46.792 "data_offset": 0, 00:14:46.792 "data_size": 65536 00:14:46.792 } 00:14:46.792 ] 00:14:46.792 }' 00:14:46.792 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.792 18:13:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:47.361 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.361 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.361 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.362 [2024-12-06 18:13:12.735100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.362 "name": "Existed_Raid", 00:14:47.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.362 "strip_size_kb": 0, 00:14:47.362 "state": "configuring", 00:14:47.362 "raid_level": "raid1", 00:14:47.362 "superblock": false, 00:14:47.362 "num_base_bdevs": 4, 00:14:47.362 "num_base_bdevs_discovered": 3, 00:14:47.362 "num_base_bdevs_operational": 4, 00:14:47.362 "base_bdevs_list": [ 00:14:47.362 { 00:14:47.362 "name": "BaseBdev1", 00:14:47.362 "uuid": "4f50d3c2-d562-4e93-be05-9a51a5c5d167", 00:14:47.362 "is_configured": true, 00:14:47.362 "data_offset": 0, 00:14:47.362 "data_size": 65536 00:14:47.362 }, 00:14:47.362 { 00:14:47.362 "name": null, 00:14:47.362 "uuid": "19561c1c-7634-4518-af16-94ccb338f994", 00:14:47.362 "is_configured": false, 00:14:47.362 "data_offset": 0, 00:14:47.362 "data_size": 65536 00:14:47.362 }, 00:14:47.362 { 
00:14:47.362 "name": "BaseBdev3", 00:14:47.362 "uuid": "17debe14-c790-4452-9b9e-c9bb97d952b5", 00:14:47.362 "is_configured": true, 00:14:47.362 "data_offset": 0, 00:14:47.362 "data_size": 65536 00:14:47.362 }, 00:14:47.362 { 00:14:47.362 "name": "BaseBdev4", 00:14:47.362 "uuid": "4573aaba-90c1-4c08-9fa0-a0980652b1cf", 00:14:47.362 "is_configured": true, 00:14:47.362 "data_offset": 0, 00:14:47.362 "data_size": 65536 00:14:47.362 } 00:14:47.362 ] 00:14:47.362 }' 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.362 18:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.929 [2024-12-06 18:13:13.311327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.929 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.930 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.930 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.930 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.930 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.930 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.930 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.930 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.189 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.189 "name": "Existed_Raid", 00:14:48.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.189 "strip_size_kb": 0, 00:14:48.189 "state": "configuring", 00:14:48.189 "raid_level": "raid1", 00:14:48.189 "superblock": false, 00:14:48.189 
"num_base_bdevs": 4, 00:14:48.189 "num_base_bdevs_discovered": 2, 00:14:48.189 "num_base_bdevs_operational": 4, 00:14:48.189 "base_bdevs_list": [ 00:14:48.189 { 00:14:48.189 "name": null, 00:14:48.189 "uuid": "4f50d3c2-d562-4e93-be05-9a51a5c5d167", 00:14:48.189 "is_configured": false, 00:14:48.189 "data_offset": 0, 00:14:48.189 "data_size": 65536 00:14:48.189 }, 00:14:48.189 { 00:14:48.189 "name": null, 00:14:48.189 "uuid": "19561c1c-7634-4518-af16-94ccb338f994", 00:14:48.189 "is_configured": false, 00:14:48.189 "data_offset": 0, 00:14:48.189 "data_size": 65536 00:14:48.189 }, 00:14:48.189 { 00:14:48.189 "name": "BaseBdev3", 00:14:48.189 "uuid": "17debe14-c790-4452-9b9e-c9bb97d952b5", 00:14:48.189 "is_configured": true, 00:14:48.189 "data_offset": 0, 00:14:48.189 "data_size": 65536 00:14:48.189 }, 00:14:48.189 { 00:14:48.189 "name": "BaseBdev4", 00:14:48.189 "uuid": "4573aaba-90c1-4c08-9fa0-a0980652b1cf", 00:14:48.189 "is_configured": true, 00:14:48.189 "data_offset": 0, 00:14:48.189 "data_size": 65536 00:14:48.189 } 00:14:48.189 ] 00:14:48.189 }' 00:14:48.189 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.189 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.448 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.448 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.448 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:48.448 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.448 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:48.706 18:13:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.706 [2024-12-06 18:13:13.987716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.706 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.707 18:13:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.707 18:13:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.707 18:13:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.707 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.707 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.707 "name": "Existed_Raid", 00:14:48.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.707 "strip_size_kb": 0, 00:14:48.707 "state": "configuring", 00:14:48.707 "raid_level": "raid1", 00:14:48.707 "superblock": false, 00:14:48.707 "num_base_bdevs": 4, 00:14:48.707 "num_base_bdevs_discovered": 3, 00:14:48.707 "num_base_bdevs_operational": 4, 00:14:48.707 "base_bdevs_list": [ 00:14:48.707 { 00:14:48.707 "name": null, 00:14:48.707 "uuid": "4f50d3c2-d562-4e93-be05-9a51a5c5d167", 00:14:48.707 "is_configured": false, 00:14:48.707 "data_offset": 0, 00:14:48.707 "data_size": 65536 00:14:48.707 }, 00:14:48.707 { 00:14:48.707 "name": "BaseBdev2", 00:14:48.707 "uuid": "19561c1c-7634-4518-af16-94ccb338f994", 00:14:48.707 "is_configured": true, 00:14:48.707 "data_offset": 0, 00:14:48.707 "data_size": 65536 00:14:48.707 }, 00:14:48.707 { 00:14:48.707 "name": "BaseBdev3", 00:14:48.707 "uuid": "17debe14-c790-4452-9b9e-c9bb97d952b5", 00:14:48.707 "is_configured": true, 00:14:48.707 "data_offset": 0, 00:14:48.707 "data_size": 65536 00:14:48.707 }, 00:14:48.707 { 00:14:48.707 "name": "BaseBdev4", 00:14:48.707 "uuid": "4573aaba-90c1-4c08-9fa0-a0980652b1cf", 00:14:48.707 "is_configured": true, 00:14:48.707 "data_offset": 0, 00:14:48.707 "data_size": 65536 00:14:48.707 } 00:14:48.707 ] 00:14:48.707 }' 00:14:48.707 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.707 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4f50d3c2-d562-4e93-be05-9a51a5c5d167 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.274 [2024-12-06 18:13:14.705829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:49.274 [2024-12-06 18:13:14.705880] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:49.274 [2024-12-06 18:13:14.705896] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:49.274 [2024-12-06 18:13:14.706218] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:49.274 [2024-12-06 18:13:14.706419] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:49.274 [2024-12-06 18:13:14.706435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:49.274 [2024-12-06 18:13:14.706746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.274 NewBaseBdev 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:49.274 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.274 18:13:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.274 [ 00:14:49.274 { 00:14:49.274 "name": "NewBaseBdev", 00:14:49.274 "aliases": [ 00:14:49.274 "4f50d3c2-d562-4e93-be05-9a51a5c5d167" 00:14:49.274 ], 00:14:49.274 "product_name": "Malloc disk", 00:14:49.274 "block_size": 512, 00:14:49.274 "num_blocks": 65536, 00:14:49.274 "uuid": "4f50d3c2-d562-4e93-be05-9a51a5c5d167", 00:14:49.274 "assigned_rate_limits": { 00:14:49.274 "rw_ios_per_sec": 0, 00:14:49.274 "rw_mbytes_per_sec": 0, 00:14:49.274 "r_mbytes_per_sec": 0, 00:14:49.274 "w_mbytes_per_sec": 0 00:14:49.274 }, 00:14:49.274 "claimed": true, 00:14:49.274 "claim_type": "exclusive_write", 00:14:49.274 "zoned": false, 00:14:49.274 "supported_io_types": { 00:14:49.274 "read": true, 00:14:49.274 "write": true, 00:14:49.274 "unmap": true, 00:14:49.274 "flush": true, 00:14:49.274 "reset": true, 00:14:49.274 "nvme_admin": false, 00:14:49.275 "nvme_io": false, 00:14:49.275 "nvme_io_md": false, 00:14:49.275 "write_zeroes": true, 00:14:49.275 "zcopy": true, 00:14:49.275 "get_zone_info": false, 00:14:49.275 "zone_management": false, 00:14:49.275 "zone_append": false, 00:14:49.275 "compare": false, 00:14:49.275 "compare_and_write": false, 00:14:49.275 "abort": true, 00:14:49.275 "seek_hole": false, 00:14:49.275 "seek_data": false, 00:14:49.275 "copy": true, 00:14:49.275 "nvme_iov_md": false 00:14:49.275 }, 00:14:49.275 "memory_domains": [ 00:14:49.275 { 00:14:49.275 "dma_device_id": "system", 00:14:49.275 "dma_device_type": 1 00:14:49.275 }, 00:14:49.275 { 00:14:49.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.275 "dma_device_type": 2 00:14:49.275 } 00:14:49.275 ], 00:14:49.275 "driver_specific": {} 00:14:49.275 } 00:14:49.275 ] 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:49.275 18:13:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.275 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.533 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.533 "name": "Existed_Raid", 00:14:49.533 "uuid": "903d4f10-172a-4fbd-b583-2119035aa7af", 00:14:49.533 "strip_size_kb": 0, 00:14:49.533 "state": "online", 00:14:49.533 "raid_level": "raid1", 
00:14:49.533 "superblock": false, 00:14:49.533 "num_base_bdevs": 4, 00:14:49.533 "num_base_bdevs_discovered": 4, 00:14:49.533 "num_base_bdevs_operational": 4, 00:14:49.533 "base_bdevs_list": [ 00:14:49.533 { 00:14:49.533 "name": "NewBaseBdev", 00:14:49.533 "uuid": "4f50d3c2-d562-4e93-be05-9a51a5c5d167", 00:14:49.533 "is_configured": true, 00:14:49.533 "data_offset": 0, 00:14:49.533 "data_size": 65536 00:14:49.533 }, 00:14:49.533 { 00:14:49.533 "name": "BaseBdev2", 00:14:49.533 "uuid": "19561c1c-7634-4518-af16-94ccb338f994", 00:14:49.533 "is_configured": true, 00:14:49.533 "data_offset": 0, 00:14:49.533 "data_size": 65536 00:14:49.533 }, 00:14:49.533 { 00:14:49.533 "name": "BaseBdev3", 00:14:49.533 "uuid": "17debe14-c790-4452-9b9e-c9bb97d952b5", 00:14:49.533 "is_configured": true, 00:14:49.533 "data_offset": 0, 00:14:49.533 "data_size": 65536 00:14:49.533 }, 00:14:49.533 { 00:14:49.533 "name": "BaseBdev4", 00:14:49.533 "uuid": "4573aaba-90c1-4c08-9fa0-a0980652b1cf", 00:14:49.533 "is_configured": true, 00:14:49.533 "data_offset": 0, 00:14:49.533 "data_size": 65536 00:14:49.533 } 00:14:49.533 ] 00:14:49.533 }' 00:14:49.533 18:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.533 18:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.791 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:49.791 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:49.791 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:49.791 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:49.791 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:49.791 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:14:49.791 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:49.791 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.791 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.791 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:49.791 [2024-12-06 18:13:15.254496] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.791 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.791 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:49.791 "name": "Existed_Raid", 00:14:49.791 "aliases": [ 00:14:49.791 "903d4f10-172a-4fbd-b583-2119035aa7af" 00:14:49.791 ], 00:14:49.791 "product_name": "Raid Volume", 00:14:49.791 "block_size": 512, 00:14:49.791 "num_blocks": 65536, 00:14:49.791 "uuid": "903d4f10-172a-4fbd-b583-2119035aa7af", 00:14:49.791 "assigned_rate_limits": { 00:14:49.791 "rw_ios_per_sec": 0, 00:14:49.791 "rw_mbytes_per_sec": 0, 00:14:49.791 "r_mbytes_per_sec": 0, 00:14:49.791 "w_mbytes_per_sec": 0 00:14:49.791 }, 00:14:49.791 "claimed": false, 00:14:49.791 "zoned": false, 00:14:49.791 "supported_io_types": { 00:14:49.791 "read": true, 00:14:49.791 "write": true, 00:14:49.791 "unmap": false, 00:14:49.791 "flush": false, 00:14:49.791 "reset": true, 00:14:49.791 "nvme_admin": false, 00:14:49.791 "nvme_io": false, 00:14:49.791 "nvme_io_md": false, 00:14:49.791 "write_zeroes": true, 00:14:49.791 "zcopy": false, 00:14:49.791 "get_zone_info": false, 00:14:49.791 "zone_management": false, 00:14:49.791 "zone_append": false, 00:14:49.791 "compare": false, 00:14:49.791 "compare_and_write": false, 00:14:49.791 "abort": false, 00:14:49.791 "seek_hole": false, 00:14:49.791 "seek_data": false, 00:14:49.791 "copy": false, 00:14:49.791 
"nvme_iov_md": false 00:14:49.791 }, 00:14:49.791 "memory_domains": [ 00:14:49.791 { 00:14:49.791 "dma_device_id": "system", 00:14:49.791 "dma_device_type": 1 00:14:49.791 }, 00:14:49.791 { 00:14:49.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.791 "dma_device_type": 2 00:14:49.791 }, 00:14:49.791 { 00:14:49.791 "dma_device_id": "system", 00:14:49.791 "dma_device_type": 1 00:14:49.792 }, 00:14:49.792 { 00:14:49.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.792 "dma_device_type": 2 00:14:49.792 }, 00:14:49.792 { 00:14:49.792 "dma_device_id": "system", 00:14:49.792 "dma_device_type": 1 00:14:49.792 }, 00:14:49.792 { 00:14:49.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.792 "dma_device_type": 2 00:14:49.792 }, 00:14:49.792 { 00:14:49.792 "dma_device_id": "system", 00:14:49.792 "dma_device_type": 1 00:14:49.792 }, 00:14:49.792 { 00:14:49.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.792 "dma_device_type": 2 00:14:49.792 } 00:14:49.792 ], 00:14:49.792 "driver_specific": { 00:14:49.792 "raid": { 00:14:49.792 "uuid": "903d4f10-172a-4fbd-b583-2119035aa7af", 00:14:49.792 "strip_size_kb": 0, 00:14:49.792 "state": "online", 00:14:49.792 "raid_level": "raid1", 00:14:49.792 "superblock": false, 00:14:49.792 "num_base_bdevs": 4, 00:14:49.792 "num_base_bdevs_discovered": 4, 00:14:49.792 "num_base_bdevs_operational": 4, 00:14:49.792 "base_bdevs_list": [ 00:14:49.792 { 00:14:49.792 "name": "NewBaseBdev", 00:14:49.792 "uuid": "4f50d3c2-d562-4e93-be05-9a51a5c5d167", 00:14:49.792 "is_configured": true, 00:14:49.792 "data_offset": 0, 00:14:49.792 "data_size": 65536 00:14:49.792 }, 00:14:49.792 { 00:14:49.792 "name": "BaseBdev2", 00:14:49.792 "uuid": "19561c1c-7634-4518-af16-94ccb338f994", 00:14:49.792 "is_configured": true, 00:14:49.792 "data_offset": 0, 00:14:49.792 "data_size": 65536 00:14:49.792 }, 00:14:49.792 { 00:14:49.792 "name": "BaseBdev3", 00:14:49.792 "uuid": "17debe14-c790-4452-9b9e-c9bb97d952b5", 00:14:49.792 "is_configured": true, 
00:14:49.792 "data_offset": 0, 00:14:49.792 "data_size": 65536 00:14:49.792 }, 00:14:49.792 { 00:14:49.792 "name": "BaseBdev4", 00:14:49.792 "uuid": "4573aaba-90c1-4c08-9fa0-a0980652b1cf", 00:14:49.792 "is_configured": true, 00:14:49.792 "data_offset": 0, 00:14:49.792 "data_size": 65536 00:14:49.792 } 00:14:49.792 ] 00:14:49.792 } 00:14:49.792 } 00:14:49.792 }' 00:14:49.792 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:50.050 BaseBdev2 00:14:50.050 BaseBdev3 00:14:50.050 BaseBdev4' 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.050 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.051 18:13:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.051 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:50.051 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.051 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.310 [2024-12-06 18:13:15.598116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:50.310 [2024-12-06 18:13:15.598151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.310 [2024-12-06 18:13:15.598257] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.310 [2024-12-06 18:13:15.598646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.310 [2024-12-06 18:13:15.598670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73371 
00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73371 ']' 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73371 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73371 00:14:50.310 killing process with pid 73371 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73371' 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73371 00:14:50.310 [2024-12-06 18:13:15.627151] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.310 18:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73371 00:14:50.570 [2024-12-06 18:13:15.976552] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.507 ************************************ 00:14:51.507 END TEST raid_state_function_test 00:14:51.507 ************************************ 00:14:51.507 18:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:51.507 00:14:51.507 real 0m12.784s 00:14:51.507 user 0m21.277s 00:14:51.507 sys 0m1.730s 00:14:51.507 18:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.507 18:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.765 18:13:17 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:14:51.765 18:13:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:51.765 18:13:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.765 18:13:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:51.765 ************************************ 00:14:51.765 START TEST raid_state_function_test_sb 00:14:51.765 ************************************ 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.765 18:13:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74060 00:14:51.765 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:51.765 Process raid pid: 74060 00:14:51.766 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74060' 00:14:51.766 18:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74060 00:14:51.766 18:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74060 ']' 00:14:51.766 18:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.766 18:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.766 18:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.766 18:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.766 18:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.766 [2024-12-06 18:13:17.208998] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:14:51.766 [2024-12-06 18:13:17.209167] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.024 [2024-12-06 18:13:17.385890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.024 [2024-12-06 18:13:17.513457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.282 [2024-12-06 18:13:17.722079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.282 [2024-12-06 18:13:17.722125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.848 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.848 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.849 [2024-12-06 18:13:18.133505] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.849 [2024-12-06 18:13:18.133576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.849 [2024-12-06 18:13:18.133593] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.849 [2024-12-06 18:13:18.133610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.849 [2024-12-06 18:13:18.133620] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:52.849 [2024-12-06 18:13:18.133635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:52.849 [2024-12-06 18:13:18.133645] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:52.849 [2024-12-06 18:13:18.133659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.849 18:13:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.849 "name": "Existed_Raid", 00:14:52.849 "uuid": "169adb32-a645-4c41-ae3f-8770049c11d5", 00:14:52.849 "strip_size_kb": 0, 00:14:52.849 "state": "configuring", 00:14:52.849 "raid_level": "raid1", 00:14:52.849 "superblock": true, 00:14:52.849 "num_base_bdevs": 4, 00:14:52.849 "num_base_bdevs_discovered": 0, 00:14:52.849 "num_base_bdevs_operational": 4, 00:14:52.849 "base_bdevs_list": [ 00:14:52.849 { 00:14:52.849 "name": "BaseBdev1", 00:14:52.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.849 "is_configured": false, 00:14:52.849 "data_offset": 0, 00:14:52.849 "data_size": 0 00:14:52.849 }, 00:14:52.849 { 00:14:52.849 "name": "BaseBdev2", 00:14:52.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.849 "is_configured": false, 00:14:52.849 "data_offset": 0, 00:14:52.849 "data_size": 0 00:14:52.849 }, 00:14:52.849 { 00:14:52.849 "name": "BaseBdev3", 00:14:52.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.849 "is_configured": false, 00:14:52.849 "data_offset": 0, 00:14:52.849 "data_size": 0 00:14:52.849 }, 00:14:52.849 { 00:14:52.849 "name": "BaseBdev4", 00:14:52.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.849 "is_configured": false, 00:14:52.849 "data_offset": 0, 00:14:52.849 "data_size": 0 00:14:52.849 } 00:14:52.849 ] 00:14:52.849 }' 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.849 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.107 18:13:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:53.107 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.107 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.367 [2024-12-06 18:13:18.629585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.367 [2024-12-06 18:13:18.629642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.367 [2024-12-06 18:13:18.637572] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:53.367 [2024-12-06 18:13:18.637634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:53.367 [2024-12-06 18:13:18.637650] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:53.367 [2024-12-06 18:13:18.637667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:53.367 [2024-12-06 18:13:18.637677] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:53.367 [2024-12-06 18:13:18.637691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:53.367 [2024-12-06 18:13:18.637700] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:14:53.367 [2024-12-06 18:13:18.637715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.367 [2024-12-06 18:13:18.682751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.367 BaseBdev1 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.367 [ 00:14:53.367 { 00:14:53.367 "name": "BaseBdev1", 00:14:53.367 "aliases": [ 00:14:53.367 "79f62d5f-730b-4280-bc6f-f0ce85446cb9" 00:14:53.367 ], 00:14:53.367 "product_name": "Malloc disk", 00:14:53.367 "block_size": 512, 00:14:53.367 "num_blocks": 65536, 00:14:53.367 "uuid": "79f62d5f-730b-4280-bc6f-f0ce85446cb9", 00:14:53.367 "assigned_rate_limits": { 00:14:53.367 "rw_ios_per_sec": 0, 00:14:53.367 "rw_mbytes_per_sec": 0, 00:14:53.367 "r_mbytes_per_sec": 0, 00:14:53.367 "w_mbytes_per_sec": 0 00:14:53.367 }, 00:14:53.367 "claimed": true, 00:14:53.367 "claim_type": "exclusive_write", 00:14:53.367 "zoned": false, 00:14:53.367 "supported_io_types": { 00:14:53.367 "read": true, 00:14:53.367 "write": true, 00:14:53.367 "unmap": true, 00:14:53.367 "flush": true, 00:14:53.367 "reset": true, 00:14:53.367 "nvme_admin": false, 00:14:53.367 "nvme_io": false, 00:14:53.367 "nvme_io_md": false, 00:14:53.367 "write_zeroes": true, 00:14:53.367 "zcopy": true, 00:14:53.367 "get_zone_info": false, 00:14:53.367 "zone_management": false, 00:14:53.367 "zone_append": false, 00:14:53.367 "compare": false, 00:14:53.367 "compare_and_write": false, 00:14:53.367 "abort": true, 00:14:53.367 "seek_hole": false, 00:14:53.367 "seek_data": false, 00:14:53.367 "copy": true, 00:14:53.367 "nvme_iov_md": false 00:14:53.367 }, 00:14:53.367 "memory_domains": [ 00:14:53.367 { 00:14:53.367 "dma_device_id": "system", 00:14:53.367 "dma_device_type": 1 00:14:53.367 }, 00:14:53.367 { 00:14:53.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.367 "dma_device_type": 2 00:14:53.367 } 00:14:53.367 
], 00:14:53.367 "driver_specific": {} 00:14:53.367 } 00:14:53.367 ] 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.367 18:13:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.367 "name": "Existed_Raid", 00:14:53.367 "uuid": "727e7ba8-5fe8-4f3b-8b74-6061a97b4c68", 00:14:53.367 "strip_size_kb": 0, 00:14:53.367 "state": "configuring", 00:14:53.367 "raid_level": "raid1", 00:14:53.367 "superblock": true, 00:14:53.367 "num_base_bdevs": 4, 00:14:53.367 "num_base_bdevs_discovered": 1, 00:14:53.367 "num_base_bdevs_operational": 4, 00:14:53.367 "base_bdevs_list": [ 00:14:53.367 { 00:14:53.367 "name": "BaseBdev1", 00:14:53.367 "uuid": "79f62d5f-730b-4280-bc6f-f0ce85446cb9", 00:14:53.367 "is_configured": true, 00:14:53.367 "data_offset": 2048, 00:14:53.367 "data_size": 63488 00:14:53.367 }, 00:14:53.367 { 00:14:53.367 "name": "BaseBdev2", 00:14:53.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.367 "is_configured": false, 00:14:53.367 "data_offset": 0, 00:14:53.367 "data_size": 0 00:14:53.367 }, 00:14:53.367 { 00:14:53.367 "name": "BaseBdev3", 00:14:53.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.367 "is_configured": false, 00:14:53.367 "data_offset": 0, 00:14:53.367 "data_size": 0 00:14:53.367 }, 00:14:53.367 { 00:14:53.367 "name": "BaseBdev4", 00:14:53.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.367 "is_configured": false, 00:14:53.367 "data_offset": 0, 00:14:53.367 "data_size": 0 00:14:53.367 } 00:14:53.367 ] 00:14:53.367 }' 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.367 18:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.934 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:53.934 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.934 18:13:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.934 [2024-12-06 18:13:19.267053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.935 [2024-12-06 18:13:19.267132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.935 [2024-12-06 18:13:19.275075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.935 [2024-12-06 18:13:19.277536] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:53.935 [2024-12-06 18:13:19.277609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:53.935 [2024-12-06 18:13:19.277626] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:53.935 [2024-12-06 18:13:19.277644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:53.935 [2024-12-06 18:13:19.277654] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:53.935 [2024-12-06 18:13:19.277668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:14:53.935 "name": "Existed_Raid", 00:14:53.935 "uuid": "bf189357-c645-4792-aa7a-dec12f0d4cff", 00:14:53.935 "strip_size_kb": 0, 00:14:53.935 "state": "configuring", 00:14:53.935 "raid_level": "raid1", 00:14:53.935 "superblock": true, 00:14:53.935 "num_base_bdevs": 4, 00:14:53.935 "num_base_bdevs_discovered": 1, 00:14:53.935 "num_base_bdevs_operational": 4, 00:14:53.935 "base_bdevs_list": [ 00:14:53.935 { 00:14:53.935 "name": "BaseBdev1", 00:14:53.935 "uuid": "79f62d5f-730b-4280-bc6f-f0ce85446cb9", 00:14:53.935 "is_configured": true, 00:14:53.935 "data_offset": 2048, 00:14:53.935 "data_size": 63488 00:14:53.935 }, 00:14:53.935 { 00:14:53.935 "name": "BaseBdev2", 00:14:53.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.935 "is_configured": false, 00:14:53.935 "data_offset": 0, 00:14:53.935 "data_size": 0 00:14:53.935 }, 00:14:53.935 { 00:14:53.935 "name": "BaseBdev3", 00:14:53.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.935 "is_configured": false, 00:14:53.935 "data_offset": 0, 00:14:53.935 "data_size": 0 00:14:53.935 }, 00:14:53.935 { 00:14:53.935 "name": "BaseBdev4", 00:14:53.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.935 "is_configured": false, 00:14:53.935 "data_offset": 0, 00:14:53.935 "data_size": 0 00:14:53.935 } 00:14:53.935 ] 00:14:53.935 }' 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.935 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.507 [2024-12-06 18:13:19.833275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:14:54.507 BaseBdev2 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.507 [ 00:14:54.507 { 00:14:54.507 "name": "BaseBdev2", 00:14:54.507 "aliases": [ 00:14:54.507 "f0c9d88e-da6c-4a79-8028-1e35a5aa3ec3" 00:14:54.507 ], 00:14:54.507 "product_name": "Malloc disk", 00:14:54.507 "block_size": 512, 00:14:54.507 "num_blocks": 65536, 00:14:54.507 "uuid": "f0c9d88e-da6c-4a79-8028-1e35a5aa3ec3", 00:14:54.507 
"assigned_rate_limits": { 00:14:54.507 "rw_ios_per_sec": 0, 00:14:54.507 "rw_mbytes_per_sec": 0, 00:14:54.507 "r_mbytes_per_sec": 0, 00:14:54.507 "w_mbytes_per_sec": 0 00:14:54.507 }, 00:14:54.507 "claimed": true, 00:14:54.507 "claim_type": "exclusive_write", 00:14:54.507 "zoned": false, 00:14:54.507 "supported_io_types": { 00:14:54.507 "read": true, 00:14:54.507 "write": true, 00:14:54.507 "unmap": true, 00:14:54.507 "flush": true, 00:14:54.507 "reset": true, 00:14:54.507 "nvme_admin": false, 00:14:54.507 "nvme_io": false, 00:14:54.507 "nvme_io_md": false, 00:14:54.507 "write_zeroes": true, 00:14:54.507 "zcopy": true, 00:14:54.507 "get_zone_info": false, 00:14:54.507 "zone_management": false, 00:14:54.507 "zone_append": false, 00:14:54.507 "compare": false, 00:14:54.507 "compare_and_write": false, 00:14:54.507 "abort": true, 00:14:54.507 "seek_hole": false, 00:14:54.507 "seek_data": false, 00:14:54.507 "copy": true, 00:14:54.507 "nvme_iov_md": false 00:14:54.507 }, 00:14:54.507 "memory_domains": [ 00:14:54.507 { 00:14:54.507 "dma_device_id": "system", 00:14:54.507 "dma_device_type": 1 00:14:54.507 }, 00:14:54.507 { 00:14:54.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.507 "dma_device_type": 2 00:14:54.507 } 00:14:54.507 ], 00:14:54.507 "driver_specific": {} 00:14:54.507 } 00:14:54.507 ] 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.507 "name": "Existed_Raid", 00:14:54.507 "uuid": "bf189357-c645-4792-aa7a-dec12f0d4cff", 00:14:54.507 "strip_size_kb": 0, 00:14:54.507 "state": "configuring", 00:14:54.507 "raid_level": "raid1", 00:14:54.507 "superblock": true, 00:14:54.507 "num_base_bdevs": 4, 00:14:54.507 "num_base_bdevs_discovered": 2, 00:14:54.507 "num_base_bdevs_operational": 4, 
00:14:54.507 "base_bdevs_list": [ 00:14:54.507 { 00:14:54.507 "name": "BaseBdev1", 00:14:54.507 "uuid": "79f62d5f-730b-4280-bc6f-f0ce85446cb9", 00:14:54.507 "is_configured": true, 00:14:54.507 "data_offset": 2048, 00:14:54.507 "data_size": 63488 00:14:54.507 }, 00:14:54.507 { 00:14:54.507 "name": "BaseBdev2", 00:14:54.507 "uuid": "f0c9d88e-da6c-4a79-8028-1e35a5aa3ec3", 00:14:54.507 "is_configured": true, 00:14:54.507 "data_offset": 2048, 00:14:54.507 "data_size": 63488 00:14:54.507 }, 00:14:54.507 { 00:14:54.507 "name": "BaseBdev3", 00:14:54.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.507 "is_configured": false, 00:14:54.507 "data_offset": 0, 00:14:54.507 "data_size": 0 00:14:54.507 }, 00:14:54.507 { 00:14:54.507 "name": "BaseBdev4", 00:14:54.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.507 "is_configured": false, 00:14:54.507 "data_offset": 0, 00:14:54.507 "data_size": 0 00:14:54.507 } 00:14:54.507 ] 00:14:54.507 }' 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.507 18:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.073 [2024-12-06 18:13:20.421667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.073 BaseBdev3 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.073 [ 00:14:55.073 { 00:14:55.073 "name": "BaseBdev3", 00:14:55.073 "aliases": [ 00:14:55.073 "688528ba-cecd-44d5-be1b-0d5a5daf6549" 00:14:55.073 ], 00:14:55.073 "product_name": "Malloc disk", 00:14:55.073 "block_size": 512, 00:14:55.073 "num_blocks": 65536, 00:14:55.073 "uuid": "688528ba-cecd-44d5-be1b-0d5a5daf6549", 00:14:55.073 "assigned_rate_limits": { 00:14:55.073 "rw_ios_per_sec": 0, 00:14:55.073 "rw_mbytes_per_sec": 0, 00:14:55.073 "r_mbytes_per_sec": 0, 00:14:55.073 "w_mbytes_per_sec": 0 00:14:55.073 }, 00:14:55.073 "claimed": true, 00:14:55.073 "claim_type": "exclusive_write", 00:14:55.073 "zoned": false, 00:14:55.073 "supported_io_types": { 00:14:55.073 "read": true, 00:14:55.073 
"write": true, 00:14:55.073 "unmap": true, 00:14:55.073 "flush": true, 00:14:55.073 "reset": true, 00:14:55.073 "nvme_admin": false, 00:14:55.073 "nvme_io": false, 00:14:55.073 "nvme_io_md": false, 00:14:55.073 "write_zeroes": true, 00:14:55.073 "zcopy": true, 00:14:55.073 "get_zone_info": false, 00:14:55.073 "zone_management": false, 00:14:55.073 "zone_append": false, 00:14:55.073 "compare": false, 00:14:55.073 "compare_and_write": false, 00:14:55.073 "abort": true, 00:14:55.073 "seek_hole": false, 00:14:55.073 "seek_data": false, 00:14:55.073 "copy": true, 00:14:55.073 "nvme_iov_md": false 00:14:55.073 }, 00:14:55.073 "memory_domains": [ 00:14:55.073 { 00:14:55.073 "dma_device_id": "system", 00:14:55.073 "dma_device_type": 1 00:14:55.073 }, 00:14:55.073 { 00:14:55.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.073 "dma_device_type": 2 00:14:55.073 } 00:14:55.073 ], 00:14:55.073 "driver_specific": {} 00:14:55.073 } 00:14:55.073 ] 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.073 "name": "Existed_Raid", 00:14:55.073 "uuid": "bf189357-c645-4792-aa7a-dec12f0d4cff", 00:14:55.073 "strip_size_kb": 0, 00:14:55.073 "state": "configuring", 00:14:55.073 "raid_level": "raid1", 00:14:55.073 "superblock": true, 00:14:55.073 "num_base_bdevs": 4, 00:14:55.073 "num_base_bdevs_discovered": 3, 00:14:55.073 "num_base_bdevs_operational": 4, 00:14:55.073 "base_bdevs_list": [ 00:14:55.073 { 00:14:55.073 "name": "BaseBdev1", 00:14:55.073 "uuid": "79f62d5f-730b-4280-bc6f-f0ce85446cb9", 00:14:55.073 "is_configured": true, 00:14:55.073 "data_offset": 2048, 00:14:55.073 "data_size": 63488 00:14:55.073 }, 00:14:55.073 { 00:14:55.073 "name": "BaseBdev2", 00:14:55.073 "uuid": 
"f0c9d88e-da6c-4a79-8028-1e35a5aa3ec3", 00:14:55.073 "is_configured": true, 00:14:55.073 "data_offset": 2048, 00:14:55.073 "data_size": 63488 00:14:55.073 }, 00:14:55.073 { 00:14:55.073 "name": "BaseBdev3", 00:14:55.073 "uuid": "688528ba-cecd-44d5-be1b-0d5a5daf6549", 00:14:55.073 "is_configured": true, 00:14:55.073 "data_offset": 2048, 00:14:55.073 "data_size": 63488 00:14:55.073 }, 00:14:55.073 { 00:14:55.073 "name": "BaseBdev4", 00:14:55.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.073 "is_configured": false, 00:14:55.073 "data_offset": 0, 00:14:55.073 "data_size": 0 00:14:55.073 } 00:14:55.073 ] 00:14:55.073 }' 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.073 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.639 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:55.639 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.639 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.639 [2024-12-06 18:13:20.998068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:55.639 [2024-12-06 18:13:20.998388] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:55.639 [2024-12-06 18:13:20.998410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:55.639 BaseBdev4 00:14:55.639 [2024-12-06 18:13:20.998791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:55.639 [2024-12-06 18:13:20.999006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:55.639 [2024-12-06 18:13:20.999028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:55.639 [2024-12-06 18:13:20.999209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.639 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.639 18:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:55.639 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:55.639 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:55.639 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:55.639 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:55.639 18:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:55.639 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:55.639 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.639 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.639 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.639 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:55.639 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.639 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.639 [ 00:14:55.639 { 00:14:55.639 "name": "BaseBdev4", 00:14:55.639 "aliases": [ 00:14:55.639 "6e8a0655-ba7f-4c14-9c72-7a27f4b906f0" 00:14:55.639 ], 00:14:55.639 "product_name": "Malloc disk", 00:14:55.639 "block_size": 512, 00:14:55.639 
"num_blocks": 65536, 00:14:55.639 "uuid": "6e8a0655-ba7f-4c14-9c72-7a27f4b906f0", 00:14:55.639 "assigned_rate_limits": { 00:14:55.639 "rw_ios_per_sec": 0, 00:14:55.639 "rw_mbytes_per_sec": 0, 00:14:55.639 "r_mbytes_per_sec": 0, 00:14:55.639 "w_mbytes_per_sec": 0 00:14:55.639 }, 00:14:55.639 "claimed": true, 00:14:55.639 "claim_type": "exclusive_write", 00:14:55.639 "zoned": false, 00:14:55.639 "supported_io_types": { 00:14:55.639 "read": true, 00:14:55.639 "write": true, 00:14:55.639 "unmap": true, 00:14:55.639 "flush": true, 00:14:55.639 "reset": true, 00:14:55.639 "nvme_admin": false, 00:14:55.639 "nvme_io": false, 00:14:55.639 "nvme_io_md": false, 00:14:55.639 "write_zeroes": true, 00:14:55.639 "zcopy": true, 00:14:55.639 "get_zone_info": false, 00:14:55.639 "zone_management": false, 00:14:55.639 "zone_append": false, 00:14:55.639 "compare": false, 00:14:55.639 "compare_and_write": false, 00:14:55.639 "abort": true, 00:14:55.639 "seek_hole": false, 00:14:55.639 "seek_data": false, 00:14:55.639 "copy": true, 00:14:55.640 "nvme_iov_md": false 00:14:55.640 }, 00:14:55.640 "memory_domains": [ 00:14:55.640 { 00:14:55.640 "dma_device_id": "system", 00:14:55.640 "dma_device_type": 1 00:14:55.640 }, 00:14:55.640 { 00:14:55.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.640 "dma_device_type": 2 00:14:55.640 } 00:14:55.640 ], 00:14:55.640 "driver_specific": {} 00:14:55.640 } 00:14:55.640 ] 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.640 "name": "Existed_Raid", 00:14:55.640 "uuid": "bf189357-c645-4792-aa7a-dec12f0d4cff", 00:14:55.640 "strip_size_kb": 0, 00:14:55.640 "state": "online", 00:14:55.640 "raid_level": "raid1", 00:14:55.640 "superblock": true, 00:14:55.640 "num_base_bdevs": 4, 
00:14:55.640 "num_base_bdevs_discovered": 4, 00:14:55.640 "num_base_bdevs_operational": 4, 00:14:55.640 "base_bdevs_list": [ 00:14:55.640 { 00:14:55.640 "name": "BaseBdev1", 00:14:55.640 "uuid": "79f62d5f-730b-4280-bc6f-f0ce85446cb9", 00:14:55.640 "is_configured": true, 00:14:55.640 "data_offset": 2048, 00:14:55.640 "data_size": 63488 00:14:55.640 }, 00:14:55.640 { 00:14:55.640 "name": "BaseBdev2", 00:14:55.640 "uuid": "f0c9d88e-da6c-4a79-8028-1e35a5aa3ec3", 00:14:55.640 "is_configured": true, 00:14:55.640 "data_offset": 2048, 00:14:55.640 "data_size": 63488 00:14:55.640 }, 00:14:55.640 { 00:14:55.640 "name": "BaseBdev3", 00:14:55.640 "uuid": "688528ba-cecd-44d5-be1b-0d5a5daf6549", 00:14:55.640 "is_configured": true, 00:14:55.640 "data_offset": 2048, 00:14:55.640 "data_size": 63488 00:14:55.640 }, 00:14:55.640 { 00:14:55.640 "name": "BaseBdev4", 00:14:55.640 "uuid": "6e8a0655-ba7f-4c14-9c72-7a27f4b906f0", 00:14:55.640 "is_configured": true, 00:14:55.640 "data_offset": 2048, 00:14:55.640 "data_size": 63488 00:14:55.640 } 00:14:55.640 ] 00:14:55.640 }' 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.640 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.207 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:56.207 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:56.207 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:56.207 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:56.207 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:56.207 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:56.207 
18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:56.207 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:56.207 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.207 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.207 [2024-12-06 18:13:21.570764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.207 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.207 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:56.207 "name": "Existed_Raid", 00:14:56.207 "aliases": [ 00:14:56.207 "bf189357-c645-4792-aa7a-dec12f0d4cff" 00:14:56.207 ], 00:14:56.207 "product_name": "Raid Volume", 00:14:56.207 "block_size": 512, 00:14:56.207 "num_blocks": 63488, 00:14:56.207 "uuid": "bf189357-c645-4792-aa7a-dec12f0d4cff", 00:14:56.207 "assigned_rate_limits": { 00:14:56.207 "rw_ios_per_sec": 0, 00:14:56.207 "rw_mbytes_per_sec": 0, 00:14:56.207 "r_mbytes_per_sec": 0, 00:14:56.207 "w_mbytes_per_sec": 0 00:14:56.207 }, 00:14:56.207 "claimed": false, 00:14:56.207 "zoned": false, 00:14:56.207 "supported_io_types": { 00:14:56.207 "read": true, 00:14:56.207 "write": true, 00:14:56.207 "unmap": false, 00:14:56.207 "flush": false, 00:14:56.207 "reset": true, 00:14:56.207 "nvme_admin": false, 00:14:56.207 "nvme_io": false, 00:14:56.207 "nvme_io_md": false, 00:14:56.207 "write_zeroes": true, 00:14:56.207 "zcopy": false, 00:14:56.207 "get_zone_info": false, 00:14:56.207 "zone_management": false, 00:14:56.207 "zone_append": false, 00:14:56.207 "compare": false, 00:14:56.207 "compare_and_write": false, 00:14:56.207 "abort": false, 00:14:56.207 "seek_hole": false, 00:14:56.207 "seek_data": false, 00:14:56.207 "copy": false, 00:14:56.207 
"nvme_iov_md": false 00:14:56.207 }, 00:14:56.207 "memory_domains": [ 00:14:56.207 { 00:14:56.207 "dma_device_id": "system", 00:14:56.207 "dma_device_type": 1 00:14:56.207 }, 00:14:56.207 { 00:14:56.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.207 "dma_device_type": 2 00:14:56.207 }, 00:14:56.207 { 00:14:56.207 "dma_device_id": "system", 00:14:56.207 "dma_device_type": 1 00:14:56.207 }, 00:14:56.207 { 00:14:56.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.207 "dma_device_type": 2 00:14:56.207 }, 00:14:56.207 { 00:14:56.207 "dma_device_id": "system", 00:14:56.207 "dma_device_type": 1 00:14:56.207 }, 00:14:56.207 { 00:14:56.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.207 "dma_device_type": 2 00:14:56.207 }, 00:14:56.207 { 00:14:56.207 "dma_device_id": "system", 00:14:56.207 "dma_device_type": 1 00:14:56.207 }, 00:14:56.207 { 00:14:56.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.207 "dma_device_type": 2 00:14:56.207 } 00:14:56.207 ], 00:14:56.208 "driver_specific": { 00:14:56.208 "raid": { 00:14:56.208 "uuid": "bf189357-c645-4792-aa7a-dec12f0d4cff", 00:14:56.208 "strip_size_kb": 0, 00:14:56.208 "state": "online", 00:14:56.208 "raid_level": "raid1", 00:14:56.208 "superblock": true, 00:14:56.208 "num_base_bdevs": 4, 00:14:56.208 "num_base_bdevs_discovered": 4, 00:14:56.208 "num_base_bdevs_operational": 4, 00:14:56.208 "base_bdevs_list": [ 00:14:56.208 { 00:14:56.208 "name": "BaseBdev1", 00:14:56.208 "uuid": "79f62d5f-730b-4280-bc6f-f0ce85446cb9", 00:14:56.208 "is_configured": true, 00:14:56.208 "data_offset": 2048, 00:14:56.208 "data_size": 63488 00:14:56.208 }, 00:14:56.208 { 00:14:56.208 "name": "BaseBdev2", 00:14:56.208 "uuid": "f0c9d88e-da6c-4a79-8028-1e35a5aa3ec3", 00:14:56.208 "is_configured": true, 00:14:56.208 "data_offset": 2048, 00:14:56.208 "data_size": 63488 00:14:56.208 }, 00:14:56.208 { 00:14:56.208 "name": "BaseBdev3", 00:14:56.208 "uuid": "688528ba-cecd-44d5-be1b-0d5a5daf6549", 00:14:56.208 "is_configured": true, 
00:14:56.208 "data_offset": 2048, 00:14:56.208 "data_size": 63488 00:14:56.208 }, 00:14:56.208 { 00:14:56.208 "name": "BaseBdev4", 00:14:56.208 "uuid": "6e8a0655-ba7f-4c14-9c72-7a27f4b906f0", 00:14:56.208 "is_configured": true, 00:14:56.208 "data_offset": 2048, 00:14:56.208 "data_size": 63488 00:14:56.208 } 00:14:56.208 ] 00:14:56.208 } 00:14:56.208 } 00:14:56.208 }' 00:14:56.208 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:56.208 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:56.208 BaseBdev2 00:14:56.208 BaseBdev3 00:14:56.208 BaseBdev4' 00:14:56.208 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.208 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:56.208 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.208 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:56.208 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.208 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.208 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.467 18:13:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.467 18:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.467 [2024-12-06 18:13:21.950509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:56.726 18:13:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.726 "name": "Existed_Raid", 00:14:56.726 "uuid": "bf189357-c645-4792-aa7a-dec12f0d4cff", 00:14:56.726 "strip_size_kb": 0, 00:14:56.726 
"state": "online", 00:14:56.726 "raid_level": "raid1", 00:14:56.726 "superblock": true, 00:14:56.726 "num_base_bdevs": 4, 00:14:56.726 "num_base_bdevs_discovered": 3, 00:14:56.726 "num_base_bdevs_operational": 3, 00:14:56.726 "base_bdevs_list": [ 00:14:56.726 { 00:14:56.726 "name": null, 00:14:56.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.726 "is_configured": false, 00:14:56.726 "data_offset": 0, 00:14:56.726 "data_size": 63488 00:14:56.726 }, 00:14:56.726 { 00:14:56.726 "name": "BaseBdev2", 00:14:56.726 "uuid": "f0c9d88e-da6c-4a79-8028-1e35a5aa3ec3", 00:14:56.726 "is_configured": true, 00:14:56.726 "data_offset": 2048, 00:14:56.726 "data_size": 63488 00:14:56.726 }, 00:14:56.726 { 00:14:56.726 "name": "BaseBdev3", 00:14:56.726 "uuid": "688528ba-cecd-44d5-be1b-0d5a5daf6549", 00:14:56.726 "is_configured": true, 00:14:56.726 "data_offset": 2048, 00:14:56.726 "data_size": 63488 00:14:56.726 }, 00:14:56.726 { 00:14:56.726 "name": "BaseBdev4", 00:14:56.726 "uuid": "6e8a0655-ba7f-4c14-9c72-7a27f4b906f0", 00:14:56.726 "is_configured": true, 00:14:56.726 "data_offset": 2048, 00:14:56.726 "data_size": 63488 00:14:56.726 } 00:14:56.726 ] 00:14:56.726 }' 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.726 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.295 18:13:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.295 [2024-12-06 18:13:22.639350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.295 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.295 [2024-12-06 18:13:22.783220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:57.554 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.555 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:57.555 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:57.555 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.555 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:57.555 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.555 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.555 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.555 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:57.555 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:57.555 18:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:57.555 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.555 18:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.555 [2024-12-06 18:13:22.924905] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:57.555 [2024-12-06 18:13:22.925172] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.555 [2024-12-06 18:13:23.011028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.555 [2024-12-06 18:13:23.011102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.555 [2024-12-06 18:13:23.011123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.555 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.814 BaseBdev2 00:14:57.814 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.814 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:57.814 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:57.814 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.814 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:57.814 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.814 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.814 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.814 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.814 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.814 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.814 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:57.814 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.814 18:13:23 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:57.814 [ 00:14:57.814 { 00:14:57.814 "name": "BaseBdev2", 00:14:57.814 "aliases": [ 00:14:57.814 "1ed8b012-c59d-4c38-893b-4d4b4ddcff2e" 00:14:57.814 ], 00:14:57.814 "product_name": "Malloc disk", 00:14:57.814 "block_size": 512, 00:14:57.814 "num_blocks": 65536, 00:14:57.814 "uuid": "1ed8b012-c59d-4c38-893b-4d4b4ddcff2e", 00:14:57.814 "assigned_rate_limits": { 00:14:57.814 "rw_ios_per_sec": 0, 00:14:57.814 "rw_mbytes_per_sec": 0, 00:14:57.814 "r_mbytes_per_sec": 0, 00:14:57.814 "w_mbytes_per_sec": 0 00:14:57.814 }, 00:14:57.814 "claimed": false, 00:14:57.814 "zoned": false, 00:14:57.814 "supported_io_types": { 00:14:57.814 "read": true, 00:14:57.814 "write": true, 00:14:57.814 "unmap": true, 00:14:57.814 "flush": true, 00:14:57.814 "reset": true, 00:14:57.814 "nvme_admin": false, 00:14:57.814 "nvme_io": false, 00:14:57.814 "nvme_io_md": false, 00:14:57.815 "write_zeroes": true, 00:14:57.815 "zcopy": true, 00:14:57.815 "get_zone_info": false, 00:14:57.815 "zone_management": false, 00:14:57.815 "zone_append": false, 00:14:57.815 "compare": false, 00:14:57.815 "compare_and_write": false, 00:14:57.815 "abort": true, 00:14:57.815 "seek_hole": false, 00:14:57.815 "seek_data": false, 00:14:57.815 "copy": true, 00:14:57.815 "nvme_iov_md": false 00:14:57.815 }, 00:14:57.815 "memory_domains": [ 00:14:57.815 { 00:14:57.815 "dma_device_id": "system", 00:14:57.815 "dma_device_type": 1 00:14:57.815 }, 00:14:57.815 { 00:14:57.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.815 "dma_device_type": 2 00:14:57.815 } 00:14:57.815 ], 00:14:57.815 "driver_specific": {} 00:14:57.815 } 00:14:57.815 ] 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:57.815 18:13:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.815 BaseBdev3 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.815 18:13:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.815 [ 00:14:57.815 { 00:14:57.815 "name": "BaseBdev3", 00:14:57.815 "aliases": [ 00:14:57.815 "d3919021-d324-4a07-b6c3-181c5a06332d" 00:14:57.815 ], 00:14:57.815 "product_name": "Malloc disk", 00:14:57.815 "block_size": 512, 00:14:57.815 "num_blocks": 65536, 00:14:57.815 "uuid": "d3919021-d324-4a07-b6c3-181c5a06332d", 00:14:57.815 "assigned_rate_limits": { 00:14:57.815 "rw_ios_per_sec": 0, 00:14:57.815 "rw_mbytes_per_sec": 0, 00:14:57.815 "r_mbytes_per_sec": 0, 00:14:57.815 "w_mbytes_per_sec": 0 00:14:57.815 }, 00:14:57.815 "claimed": false, 00:14:57.815 "zoned": false, 00:14:57.815 "supported_io_types": { 00:14:57.815 "read": true, 00:14:57.815 "write": true, 00:14:57.815 "unmap": true, 00:14:57.815 "flush": true, 00:14:57.815 "reset": true, 00:14:57.815 "nvme_admin": false, 00:14:57.815 "nvme_io": false, 00:14:57.815 "nvme_io_md": false, 00:14:57.815 "write_zeroes": true, 00:14:57.815 "zcopy": true, 00:14:57.815 "get_zone_info": false, 00:14:57.815 "zone_management": false, 00:14:57.815 "zone_append": false, 00:14:57.815 "compare": false, 00:14:57.815 "compare_and_write": false, 00:14:57.815 "abort": true, 00:14:57.815 "seek_hole": false, 00:14:57.815 "seek_data": false, 00:14:57.815 "copy": true, 00:14:57.815 "nvme_iov_md": false 00:14:57.815 }, 00:14:57.815 "memory_domains": [ 00:14:57.815 { 00:14:57.815 "dma_device_id": "system", 00:14:57.815 "dma_device_type": 1 00:14:57.815 }, 00:14:57.815 { 00:14:57.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.815 "dma_device_type": 2 00:14:57.815 } 00:14:57.815 ], 00:14:57.815 "driver_specific": {} 00:14:57.815 } 00:14:57.815 ] 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.815 BaseBdev4 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.815 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.815 [ 00:14:57.815 { 00:14:57.815 "name": "BaseBdev4", 00:14:57.815 "aliases": [ 00:14:57.815 "a593ce87-72f3-4f2f-b035-4b4f06490d8a" 00:14:57.815 ], 00:14:57.815 "product_name": "Malloc disk", 00:14:57.815 "block_size": 512, 00:14:57.815 "num_blocks": 65536, 00:14:57.815 "uuid": "a593ce87-72f3-4f2f-b035-4b4f06490d8a", 00:14:57.815 "assigned_rate_limits": { 00:14:57.816 "rw_ios_per_sec": 0, 00:14:57.816 "rw_mbytes_per_sec": 0, 00:14:57.816 "r_mbytes_per_sec": 0, 00:14:57.816 "w_mbytes_per_sec": 0 00:14:57.816 }, 00:14:57.816 "claimed": false, 00:14:57.816 "zoned": false, 00:14:57.816 "supported_io_types": { 00:14:57.816 "read": true, 00:14:57.816 "write": true, 00:14:57.816 "unmap": true, 00:14:57.816 "flush": true, 00:14:57.816 "reset": true, 00:14:57.816 "nvme_admin": false, 00:14:57.816 "nvme_io": false, 00:14:57.816 "nvme_io_md": false, 00:14:57.816 "write_zeroes": true, 00:14:57.816 "zcopy": true, 00:14:57.816 "get_zone_info": false, 00:14:57.816 "zone_management": false, 00:14:57.816 "zone_append": false, 00:14:57.816 "compare": false, 00:14:57.816 "compare_and_write": false, 00:14:57.816 "abort": true, 00:14:57.816 "seek_hole": false, 00:14:57.816 "seek_data": false, 00:14:57.816 "copy": true, 00:14:57.816 "nvme_iov_md": false 00:14:57.816 }, 00:14:57.816 "memory_domains": [ 00:14:57.816 { 00:14:57.816 "dma_device_id": "system", 00:14:57.816 "dma_device_type": 1 00:14:57.816 }, 00:14:57.816 { 00:14:57.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.816 "dma_device_type": 2 00:14:57.816 } 00:14:57.816 ], 00:14:57.816 "driver_specific": {} 00:14:57.816 } 00:14:57.816 ] 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.816 [2024-12-06 18:13:23.282727] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:57.816 [2024-12-06 18:13:23.282947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:57.816 [2024-12-06 18:13:23.283116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.816 [2024-12-06 18:13:23.285662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.816 [2024-12-06 18:13:23.285865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.816 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.078 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.078 "name": "Existed_Raid", 00:14:58.078 "uuid": "8e6b5dfc-6e22-49d1-9048-8bde588310c4", 00:14:58.078 "strip_size_kb": 0, 00:14:58.078 "state": "configuring", 00:14:58.078 "raid_level": "raid1", 00:14:58.078 "superblock": true, 00:14:58.078 "num_base_bdevs": 4, 00:14:58.079 "num_base_bdevs_discovered": 3, 00:14:58.079 "num_base_bdevs_operational": 4, 00:14:58.079 "base_bdevs_list": [ 00:14:58.079 { 00:14:58.079 "name": "BaseBdev1", 00:14:58.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.079 "is_configured": false, 00:14:58.079 "data_offset": 0, 00:14:58.079 "data_size": 0 00:14:58.079 }, 00:14:58.079 { 00:14:58.079 "name": "BaseBdev2", 00:14:58.079 "uuid": "1ed8b012-c59d-4c38-893b-4d4b4ddcff2e", 
00:14:58.079 "is_configured": true, 00:14:58.079 "data_offset": 2048, 00:14:58.079 "data_size": 63488 00:14:58.079 }, 00:14:58.079 { 00:14:58.079 "name": "BaseBdev3", 00:14:58.079 "uuid": "d3919021-d324-4a07-b6c3-181c5a06332d", 00:14:58.079 "is_configured": true, 00:14:58.079 "data_offset": 2048, 00:14:58.079 "data_size": 63488 00:14:58.079 }, 00:14:58.079 { 00:14:58.079 "name": "BaseBdev4", 00:14:58.079 "uuid": "a593ce87-72f3-4f2f-b035-4b4f06490d8a", 00:14:58.079 "is_configured": true, 00:14:58.079 "data_offset": 2048, 00:14:58.079 "data_size": 63488 00:14:58.079 } 00:14:58.079 ] 00:14:58.079 }' 00:14:58.079 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.079 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.360 [2024-12-06 18:13:23.802910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.360 "name": "Existed_Raid", 00:14:58.360 "uuid": "8e6b5dfc-6e22-49d1-9048-8bde588310c4", 00:14:58.360 "strip_size_kb": 0, 00:14:58.360 "state": "configuring", 00:14:58.360 "raid_level": "raid1", 00:14:58.360 "superblock": true, 00:14:58.360 "num_base_bdevs": 4, 00:14:58.360 "num_base_bdevs_discovered": 2, 00:14:58.360 "num_base_bdevs_operational": 4, 00:14:58.360 "base_bdevs_list": [ 00:14:58.360 { 00:14:58.360 "name": "BaseBdev1", 00:14:58.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.360 "is_configured": false, 00:14:58.360 "data_offset": 0, 00:14:58.360 "data_size": 0 00:14:58.360 }, 00:14:58.360 { 00:14:58.360 "name": null, 00:14:58.360 "uuid": "1ed8b012-c59d-4c38-893b-4d4b4ddcff2e", 00:14:58.360 
"is_configured": false, 00:14:58.360 "data_offset": 0, 00:14:58.360 "data_size": 63488 00:14:58.360 }, 00:14:58.360 { 00:14:58.360 "name": "BaseBdev3", 00:14:58.360 "uuid": "d3919021-d324-4a07-b6c3-181c5a06332d", 00:14:58.360 "is_configured": true, 00:14:58.360 "data_offset": 2048, 00:14:58.360 "data_size": 63488 00:14:58.360 }, 00:14:58.360 { 00:14:58.360 "name": "BaseBdev4", 00:14:58.360 "uuid": "a593ce87-72f3-4f2f-b035-4b4f06490d8a", 00:14:58.360 "is_configured": true, 00:14:58.360 "data_offset": 2048, 00:14:58.360 "data_size": 63488 00:14:58.360 } 00:14:58.360 ] 00:14:58.360 }' 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.360 18:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.928 [2024-12-06 18:13:24.428642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.928 BaseBdev1 
00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.928 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.186 [ 00:14:59.186 { 00:14:59.186 "name": "BaseBdev1", 00:14:59.186 "aliases": [ 00:14:59.186 "04a2cc9e-3d9e-495a-aac3-591d1bfa25e1" 00:14:59.186 ], 00:14:59.186 "product_name": "Malloc disk", 00:14:59.186 "block_size": 512, 00:14:59.186 "num_blocks": 65536, 00:14:59.186 "uuid": "04a2cc9e-3d9e-495a-aac3-591d1bfa25e1", 00:14:59.186 "assigned_rate_limits": { 00:14:59.186 
"rw_ios_per_sec": 0, 00:14:59.186 "rw_mbytes_per_sec": 0, 00:14:59.186 "r_mbytes_per_sec": 0, 00:14:59.186 "w_mbytes_per_sec": 0 00:14:59.186 }, 00:14:59.186 "claimed": true, 00:14:59.186 "claim_type": "exclusive_write", 00:14:59.186 "zoned": false, 00:14:59.186 "supported_io_types": { 00:14:59.186 "read": true, 00:14:59.186 "write": true, 00:14:59.186 "unmap": true, 00:14:59.186 "flush": true, 00:14:59.186 "reset": true, 00:14:59.186 "nvme_admin": false, 00:14:59.186 "nvme_io": false, 00:14:59.186 "nvme_io_md": false, 00:14:59.187 "write_zeroes": true, 00:14:59.187 "zcopy": true, 00:14:59.187 "get_zone_info": false, 00:14:59.187 "zone_management": false, 00:14:59.187 "zone_append": false, 00:14:59.187 "compare": false, 00:14:59.187 "compare_and_write": false, 00:14:59.187 "abort": true, 00:14:59.187 "seek_hole": false, 00:14:59.187 "seek_data": false, 00:14:59.187 "copy": true, 00:14:59.187 "nvme_iov_md": false 00:14:59.187 }, 00:14:59.187 "memory_domains": [ 00:14:59.187 { 00:14:59.187 "dma_device_id": "system", 00:14:59.187 "dma_device_type": 1 00:14:59.187 }, 00:14:59.187 { 00:14:59.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.187 "dma_device_type": 2 00:14:59.187 } 00:14:59.187 ], 00:14:59.187 "driver_specific": {} 00:14:59.187 } 00:14:59.187 ] 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.187 "name": "Existed_Raid", 00:14:59.187 "uuid": "8e6b5dfc-6e22-49d1-9048-8bde588310c4", 00:14:59.187 "strip_size_kb": 0, 00:14:59.187 "state": "configuring", 00:14:59.187 "raid_level": "raid1", 00:14:59.187 "superblock": true, 00:14:59.187 "num_base_bdevs": 4, 00:14:59.187 "num_base_bdevs_discovered": 3, 00:14:59.187 "num_base_bdevs_operational": 4, 00:14:59.187 "base_bdevs_list": [ 00:14:59.187 { 00:14:59.187 "name": "BaseBdev1", 00:14:59.187 "uuid": "04a2cc9e-3d9e-495a-aac3-591d1bfa25e1", 00:14:59.187 "is_configured": true, 00:14:59.187 "data_offset": 2048, 00:14:59.187 "data_size": 63488 
00:14:59.187 }, 00:14:59.187 { 00:14:59.187 "name": null, 00:14:59.187 "uuid": "1ed8b012-c59d-4c38-893b-4d4b4ddcff2e", 00:14:59.187 "is_configured": false, 00:14:59.187 "data_offset": 0, 00:14:59.187 "data_size": 63488 00:14:59.187 }, 00:14:59.187 { 00:14:59.187 "name": "BaseBdev3", 00:14:59.187 "uuid": "d3919021-d324-4a07-b6c3-181c5a06332d", 00:14:59.187 "is_configured": true, 00:14:59.187 "data_offset": 2048, 00:14:59.187 "data_size": 63488 00:14:59.187 }, 00:14:59.187 { 00:14:59.187 "name": "BaseBdev4", 00:14:59.187 "uuid": "a593ce87-72f3-4f2f-b035-4b4f06490d8a", 00:14:59.187 "is_configured": true, 00:14:59.187 "data_offset": 2048, 00:14:59.187 "data_size": 63488 00:14:59.187 } 00:14:59.187 ] 00:14:59.187 }' 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.187 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.754 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:59.754 18:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.754 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.754 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.754 18:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.754 
[2024-12-06 18:13:25.024923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.754 18:13:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.754 "name": "Existed_Raid", 00:14:59.754 "uuid": "8e6b5dfc-6e22-49d1-9048-8bde588310c4", 00:14:59.754 "strip_size_kb": 0, 00:14:59.754 "state": "configuring", 00:14:59.754 "raid_level": "raid1", 00:14:59.754 "superblock": true, 00:14:59.754 "num_base_bdevs": 4, 00:14:59.754 "num_base_bdevs_discovered": 2, 00:14:59.754 "num_base_bdevs_operational": 4, 00:14:59.754 "base_bdevs_list": [ 00:14:59.754 { 00:14:59.754 "name": "BaseBdev1", 00:14:59.754 "uuid": "04a2cc9e-3d9e-495a-aac3-591d1bfa25e1", 00:14:59.754 "is_configured": true, 00:14:59.754 "data_offset": 2048, 00:14:59.754 "data_size": 63488 00:14:59.754 }, 00:14:59.754 { 00:14:59.754 "name": null, 00:14:59.754 "uuid": "1ed8b012-c59d-4c38-893b-4d4b4ddcff2e", 00:14:59.754 "is_configured": false, 00:14:59.754 "data_offset": 0, 00:14:59.754 "data_size": 63488 00:14:59.754 }, 00:14:59.754 { 00:14:59.754 "name": null, 00:14:59.754 "uuid": "d3919021-d324-4a07-b6c3-181c5a06332d", 00:14:59.754 "is_configured": false, 00:14:59.754 "data_offset": 0, 00:14:59.754 "data_size": 63488 00:14:59.754 }, 00:14:59.754 { 00:14:59.754 "name": "BaseBdev4", 00:14:59.754 "uuid": "a593ce87-72f3-4f2f-b035-4b4f06490d8a", 00:14:59.754 "is_configured": true, 00:14:59.754 "data_offset": 2048, 00:14:59.754 "data_size": 63488 00:14:59.754 } 00:14:59.754 ] 00:14:59.754 }' 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.754 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.320 18:13:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.320 [2024-12-06 18:13:25.609076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.320 "name": "Existed_Raid", 00:15:00.320 "uuid": "8e6b5dfc-6e22-49d1-9048-8bde588310c4", 00:15:00.320 "strip_size_kb": 0, 00:15:00.320 "state": "configuring", 00:15:00.320 "raid_level": "raid1", 00:15:00.320 "superblock": true, 00:15:00.320 "num_base_bdevs": 4, 00:15:00.320 "num_base_bdevs_discovered": 3, 00:15:00.320 "num_base_bdevs_operational": 4, 00:15:00.320 "base_bdevs_list": [ 00:15:00.320 { 00:15:00.320 "name": "BaseBdev1", 00:15:00.320 "uuid": "04a2cc9e-3d9e-495a-aac3-591d1bfa25e1", 00:15:00.320 "is_configured": true, 00:15:00.320 "data_offset": 2048, 00:15:00.320 "data_size": 63488 00:15:00.320 }, 00:15:00.320 { 00:15:00.320 "name": null, 00:15:00.320 "uuid": "1ed8b012-c59d-4c38-893b-4d4b4ddcff2e", 00:15:00.320 "is_configured": false, 00:15:00.320 "data_offset": 0, 00:15:00.320 "data_size": 63488 00:15:00.320 }, 00:15:00.320 { 00:15:00.320 "name": "BaseBdev3", 00:15:00.320 "uuid": "d3919021-d324-4a07-b6c3-181c5a06332d", 00:15:00.320 "is_configured": true, 00:15:00.320 "data_offset": 2048, 00:15:00.320 "data_size": 63488 00:15:00.320 }, 00:15:00.320 { 00:15:00.320 "name": "BaseBdev4", 00:15:00.320 "uuid": 
"a593ce87-72f3-4f2f-b035-4b4f06490d8a", 00:15:00.320 "is_configured": true, 00:15:00.320 "data_offset": 2048, 00:15:00.320 "data_size": 63488 00:15:00.320 } 00:15:00.320 ] 00:15:00.320 }' 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.320 18:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.886 [2024-12-06 18:13:26.201503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.886 "name": "Existed_Raid", 00:15:00.886 "uuid": "8e6b5dfc-6e22-49d1-9048-8bde588310c4", 00:15:00.886 "strip_size_kb": 0, 00:15:00.886 "state": "configuring", 00:15:00.886 "raid_level": "raid1", 00:15:00.886 "superblock": true, 00:15:00.886 "num_base_bdevs": 4, 00:15:00.886 "num_base_bdevs_discovered": 2, 00:15:00.886 "num_base_bdevs_operational": 4, 00:15:00.886 "base_bdevs_list": [ 00:15:00.886 { 00:15:00.886 "name": null, 00:15:00.886 
"uuid": "04a2cc9e-3d9e-495a-aac3-591d1bfa25e1", 00:15:00.886 "is_configured": false, 00:15:00.886 "data_offset": 0, 00:15:00.886 "data_size": 63488 00:15:00.886 }, 00:15:00.886 { 00:15:00.886 "name": null, 00:15:00.886 "uuid": "1ed8b012-c59d-4c38-893b-4d4b4ddcff2e", 00:15:00.886 "is_configured": false, 00:15:00.886 "data_offset": 0, 00:15:00.886 "data_size": 63488 00:15:00.886 }, 00:15:00.886 { 00:15:00.886 "name": "BaseBdev3", 00:15:00.886 "uuid": "d3919021-d324-4a07-b6c3-181c5a06332d", 00:15:00.886 "is_configured": true, 00:15:00.886 "data_offset": 2048, 00:15:00.886 "data_size": 63488 00:15:00.886 }, 00:15:00.886 { 00:15:00.886 "name": "BaseBdev4", 00:15:00.886 "uuid": "a593ce87-72f3-4f2f-b035-4b4f06490d8a", 00:15:00.886 "is_configured": true, 00:15:00.886 "data_offset": 2048, 00:15:00.886 "data_size": 63488 00:15:00.886 } 00:15:00.886 ] 00:15:00.886 }' 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.886 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.452 [2024-12-06 18:13:26.866372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.452 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.453 18:13:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.453 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.453 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.453 "name": "Existed_Raid", 00:15:01.453 "uuid": "8e6b5dfc-6e22-49d1-9048-8bde588310c4", 00:15:01.453 "strip_size_kb": 0, 00:15:01.453 "state": "configuring", 00:15:01.453 "raid_level": "raid1", 00:15:01.453 "superblock": true, 00:15:01.453 "num_base_bdevs": 4, 00:15:01.453 "num_base_bdevs_discovered": 3, 00:15:01.453 "num_base_bdevs_operational": 4, 00:15:01.453 "base_bdevs_list": [ 00:15:01.453 { 00:15:01.453 "name": null, 00:15:01.453 "uuid": "04a2cc9e-3d9e-495a-aac3-591d1bfa25e1", 00:15:01.453 "is_configured": false, 00:15:01.453 "data_offset": 0, 00:15:01.453 "data_size": 63488 00:15:01.453 }, 00:15:01.453 { 00:15:01.453 "name": "BaseBdev2", 00:15:01.453 "uuid": "1ed8b012-c59d-4c38-893b-4d4b4ddcff2e", 00:15:01.453 "is_configured": true, 00:15:01.453 "data_offset": 2048, 00:15:01.453 "data_size": 63488 00:15:01.453 }, 00:15:01.453 { 00:15:01.453 "name": "BaseBdev3", 00:15:01.453 "uuid": "d3919021-d324-4a07-b6c3-181c5a06332d", 00:15:01.453 "is_configured": true, 00:15:01.453 "data_offset": 2048, 00:15:01.453 "data_size": 63488 00:15:01.453 }, 00:15:01.453 { 00:15:01.453 "name": "BaseBdev4", 00:15:01.453 "uuid": "a593ce87-72f3-4f2f-b035-4b4f06490d8a", 00:15:01.453 "is_configured": true, 00:15:01.453 "data_offset": 2048, 00:15:01.453 "data_size": 63488 00:15:01.453 } 00:15:01.453 ] 00:15:01.453 }' 00:15:01.453 18:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.453 18:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:02.020 18:13:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 04a2cc9e-3d9e-495a-aac3-591d1bfa25e1 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.020 [2024-12-06 18:13:27.484857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:02.020 NewBaseBdev 00:15:02.020 [2024-12-06 18:13:27.485401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:02.020 [2024-12-06 18:13:27.485443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:02.020 [2024-12-06 18:13:27.485795] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:02.020 [2024-12-06 18:13:27.486001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:02.020 [2024-12-06 18:13:27.486018] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:02.020 [2024-12-06 18:13:27.486184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.020 18:13:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.020 [ 00:15:02.020 { 00:15:02.020 "name": "NewBaseBdev", 00:15:02.020 "aliases": [ 00:15:02.020 "04a2cc9e-3d9e-495a-aac3-591d1bfa25e1" 00:15:02.020 ], 00:15:02.020 "product_name": "Malloc disk", 00:15:02.020 "block_size": 512, 00:15:02.020 "num_blocks": 65536, 00:15:02.020 "uuid": "04a2cc9e-3d9e-495a-aac3-591d1bfa25e1", 00:15:02.020 "assigned_rate_limits": { 00:15:02.020 "rw_ios_per_sec": 0, 00:15:02.020 "rw_mbytes_per_sec": 0, 00:15:02.020 "r_mbytes_per_sec": 0, 00:15:02.020 "w_mbytes_per_sec": 0 00:15:02.020 }, 00:15:02.020 "claimed": true, 00:15:02.020 "claim_type": "exclusive_write", 00:15:02.020 "zoned": false, 00:15:02.020 "supported_io_types": { 00:15:02.020 "read": true, 00:15:02.020 "write": true, 00:15:02.020 "unmap": true, 00:15:02.020 "flush": true, 00:15:02.020 "reset": true, 00:15:02.020 "nvme_admin": false, 00:15:02.020 "nvme_io": false, 00:15:02.020 "nvme_io_md": false, 00:15:02.020 "write_zeroes": true, 00:15:02.020 "zcopy": true, 00:15:02.020 "get_zone_info": false, 00:15:02.020 "zone_management": false, 00:15:02.020 "zone_append": false, 00:15:02.020 "compare": false, 00:15:02.020 "compare_and_write": false, 00:15:02.020 "abort": true, 00:15:02.020 "seek_hole": false, 00:15:02.020 "seek_data": false, 00:15:02.020 "copy": true, 00:15:02.020 "nvme_iov_md": false 00:15:02.020 }, 00:15:02.020 "memory_domains": [ 00:15:02.020 { 00:15:02.020 "dma_device_id": "system", 00:15:02.020 "dma_device_type": 1 00:15:02.020 }, 00:15:02.020 { 00:15:02.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.020 "dma_device_type": 2 00:15:02.020 } 00:15:02.020 ], 00:15:02.020 "driver_specific": {} 00:15:02.020 } 00:15:02.020 ] 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:02.020 18:13:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.020 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.021 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.021 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.021 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.021 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.021 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.021 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.021 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.021 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.330 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.330 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.330 "name": "Existed_Raid", 00:15:02.330 "uuid": "8e6b5dfc-6e22-49d1-9048-8bde588310c4", 00:15:02.330 "strip_size_kb": 0, 00:15:02.330 
"state": "online", 00:15:02.330 "raid_level": "raid1", 00:15:02.330 "superblock": true, 00:15:02.330 "num_base_bdevs": 4, 00:15:02.330 "num_base_bdevs_discovered": 4, 00:15:02.330 "num_base_bdevs_operational": 4, 00:15:02.330 "base_bdevs_list": [ 00:15:02.330 { 00:15:02.330 "name": "NewBaseBdev", 00:15:02.330 "uuid": "04a2cc9e-3d9e-495a-aac3-591d1bfa25e1", 00:15:02.330 "is_configured": true, 00:15:02.330 "data_offset": 2048, 00:15:02.330 "data_size": 63488 00:15:02.330 }, 00:15:02.330 { 00:15:02.330 "name": "BaseBdev2", 00:15:02.330 "uuid": "1ed8b012-c59d-4c38-893b-4d4b4ddcff2e", 00:15:02.330 "is_configured": true, 00:15:02.330 "data_offset": 2048, 00:15:02.330 "data_size": 63488 00:15:02.330 }, 00:15:02.330 { 00:15:02.330 "name": "BaseBdev3", 00:15:02.330 "uuid": "d3919021-d324-4a07-b6c3-181c5a06332d", 00:15:02.330 "is_configured": true, 00:15:02.330 "data_offset": 2048, 00:15:02.330 "data_size": 63488 00:15:02.330 }, 00:15:02.330 { 00:15:02.330 "name": "BaseBdev4", 00:15:02.330 "uuid": "a593ce87-72f3-4f2f-b035-4b4f06490d8a", 00:15:02.330 "is_configured": true, 00:15:02.330 "data_offset": 2048, 00:15:02.330 "data_size": 63488 00:15:02.330 } 00:15:02.330 ] 00:15:02.330 }' 00:15:02.330 18:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.330 18:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.590 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:02.590 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:02.590 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:02.590 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:02.590 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:02.590 
18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:02.590 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:02.590 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:02.590 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.590 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.590 [2024-12-06 18:13:28.049469] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.590 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.590 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:02.590 "name": "Existed_Raid", 00:15:02.590 "aliases": [ 00:15:02.590 "8e6b5dfc-6e22-49d1-9048-8bde588310c4" 00:15:02.590 ], 00:15:02.590 "product_name": "Raid Volume", 00:15:02.590 "block_size": 512, 00:15:02.590 "num_blocks": 63488, 00:15:02.590 "uuid": "8e6b5dfc-6e22-49d1-9048-8bde588310c4", 00:15:02.590 "assigned_rate_limits": { 00:15:02.590 "rw_ios_per_sec": 0, 00:15:02.590 "rw_mbytes_per_sec": 0, 00:15:02.590 "r_mbytes_per_sec": 0, 00:15:02.590 "w_mbytes_per_sec": 0 00:15:02.590 }, 00:15:02.590 "claimed": false, 00:15:02.590 "zoned": false, 00:15:02.590 "supported_io_types": { 00:15:02.590 "read": true, 00:15:02.590 "write": true, 00:15:02.590 "unmap": false, 00:15:02.590 "flush": false, 00:15:02.590 "reset": true, 00:15:02.590 "nvme_admin": false, 00:15:02.590 "nvme_io": false, 00:15:02.590 "nvme_io_md": false, 00:15:02.590 "write_zeroes": true, 00:15:02.590 "zcopy": false, 00:15:02.590 "get_zone_info": false, 00:15:02.590 "zone_management": false, 00:15:02.590 "zone_append": false, 00:15:02.590 "compare": false, 00:15:02.590 "compare_and_write": false, 00:15:02.590 
"abort": false, 00:15:02.590 "seek_hole": false, 00:15:02.590 "seek_data": false, 00:15:02.590 "copy": false, 00:15:02.590 "nvme_iov_md": false 00:15:02.590 }, 00:15:02.590 "memory_domains": [ 00:15:02.590 { 00:15:02.590 "dma_device_id": "system", 00:15:02.590 "dma_device_type": 1 00:15:02.590 }, 00:15:02.590 { 00:15:02.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.590 "dma_device_type": 2 00:15:02.590 }, 00:15:02.590 { 00:15:02.590 "dma_device_id": "system", 00:15:02.590 "dma_device_type": 1 00:15:02.590 }, 00:15:02.590 { 00:15:02.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.590 "dma_device_type": 2 00:15:02.590 }, 00:15:02.590 { 00:15:02.590 "dma_device_id": "system", 00:15:02.590 "dma_device_type": 1 00:15:02.590 }, 00:15:02.590 { 00:15:02.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.590 "dma_device_type": 2 00:15:02.590 }, 00:15:02.590 { 00:15:02.590 "dma_device_id": "system", 00:15:02.590 "dma_device_type": 1 00:15:02.590 }, 00:15:02.590 { 00:15:02.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.590 "dma_device_type": 2 00:15:02.590 } 00:15:02.590 ], 00:15:02.590 "driver_specific": { 00:15:02.590 "raid": { 00:15:02.590 "uuid": "8e6b5dfc-6e22-49d1-9048-8bde588310c4", 00:15:02.590 "strip_size_kb": 0, 00:15:02.590 "state": "online", 00:15:02.590 "raid_level": "raid1", 00:15:02.590 "superblock": true, 00:15:02.590 "num_base_bdevs": 4, 00:15:02.590 "num_base_bdevs_discovered": 4, 00:15:02.590 "num_base_bdevs_operational": 4, 00:15:02.590 "base_bdevs_list": [ 00:15:02.590 { 00:15:02.590 "name": "NewBaseBdev", 00:15:02.590 "uuid": "04a2cc9e-3d9e-495a-aac3-591d1bfa25e1", 00:15:02.590 "is_configured": true, 00:15:02.590 "data_offset": 2048, 00:15:02.590 "data_size": 63488 00:15:02.590 }, 00:15:02.590 { 00:15:02.590 "name": "BaseBdev2", 00:15:02.590 "uuid": "1ed8b012-c59d-4c38-893b-4d4b4ddcff2e", 00:15:02.590 "is_configured": true, 00:15:02.590 "data_offset": 2048, 00:15:02.590 "data_size": 63488 00:15:02.590 }, 00:15:02.590 { 
00:15:02.590 "name": "BaseBdev3", 00:15:02.590 "uuid": "d3919021-d324-4a07-b6c3-181c5a06332d", 00:15:02.590 "is_configured": true, 00:15:02.590 "data_offset": 2048, 00:15:02.590 "data_size": 63488 00:15:02.590 }, 00:15:02.590 { 00:15:02.590 "name": "BaseBdev4", 00:15:02.590 "uuid": "a593ce87-72f3-4f2f-b035-4b4f06490d8a", 00:15:02.590 "is_configured": true, 00:15:02.590 "data_offset": 2048, 00:15:02.590 "data_size": 63488 00:15:02.590 } 00:15:02.590 ] 00:15:02.590 } 00:15:02.590 } 00:15:02.590 }' 00:15:02.590 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:02.850 BaseBdev2 00:15:02.850 BaseBdev3 00:15:02.850 BaseBdev4' 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.850 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.109 [2024-12-06 18:13:28.441511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:03.109 [2024-12-06 18:13:28.441548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.109 [2024-12-06 18:13:28.441672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.109 [2024-12-06 18:13:28.442075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.109 [2024-12-06 18:13:28.442102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74060 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74060 ']' 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74060 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74060 00:15:03.109 killing process with pid 74060 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74060' 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74060 00:15:03.109 [2024-12-06 18:13:28.478437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:03.109 18:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74060 00:15:03.366 [2024-12-06 18:13:28.840375] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.739 ************************************ 00:15:04.739 END TEST raid_state_function_test_sb 00:15:04.739 ************************************ 00:15:04.739 18:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:04.739 00:15:04.739 real 0m12.827s 
00:15:04.739 user 0m21.285s 00:15:04.739 sys 0m1.753s 00:15:04.739 18:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.739 18:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.739 18:13:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:15:04.739 18:13:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:04.739 18:13:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.739 18:13:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:04.739 ************************************ 00:15:04.739 START TEST raid_superblock_test 00:15:04.739 ************************************ 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:04.739 18:13:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:04.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74742 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74742 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74742 ']' 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:04.739 18:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.739 [2024-12-06 18:13:30.066405] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:15:04.739 [2024-12-06 18:13:30.066869] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74742 ] 00:15:04.997 [2024-12-06 18:13:30.258423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.997 [2024-12-06 18:13:30.414888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.255 [2024-12-06 18:13:30.631315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.255 [2024-12-06 18:13:30.631564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:05.823 
18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.823 malloc1 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.823 [2024-12-06 18:13:31.101734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:05.823 [2024-12-06 18:13:31.101951] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.823 [2024-12-06 18:13:31.102121] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:05.823 [2024-12-06 18:13:31.102239] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.823 [2024-12-06 18:13:31.105193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.823 [2024-12-06 18:13:31.105241] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:05.823 pt1 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.823 malloc2 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.823 [2024-12-06 18:13:31.150382] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:05.823 [2024-12-06 18:13:31.150457] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.823 [2024-12-06 18:13:31.150507] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:05.823 [2024-12-06 18:13:31.150525] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.823 [2024-12-06 18:13:31.153359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.823 [2024-12-06 18:13:31.153423] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:05.823 
pt2 00:15:05.823 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.824 malloc3 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.824 [2024-12-06 18:13:31.214955] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:05.824 [2024-12-06 18:13:31.215027] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.824 [2024-12-06 18:13:31.215072] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:05.824 [2024-12-06 18:13:31.215088] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.824 [2024-12-06 18:13:31.217959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.824 [2024-12-06 18:13:31.218012] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:05.824 pt3 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.824 malloc4 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.824 [2024-12-06 18:13:31.271225] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:05.824 [2024-12-06 18:13:31.271312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.824 [2024-12-06 18:13:31.271347] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:05.824 [2024-12-06 18:13:31.271363] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.824 [2024-12-06 18:13:31.274233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.824 [2024-12-06 18:13:31.274280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:05.824 pt4 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.824 [2024-12-06 18:13:31.283277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:05.824 [2024-12-06 18:13:31.285940] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:05.824 [2024-12-06 18:13:31.286038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:05.824 [2024-12-06 18:13:31.286137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:05.824 [2024-12-06 18:13:31.286413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:05.824 [2024-12-06 18:13:31.286436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:05.824 [2024-12-06 18:13:31.286961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:05.824 [2024-12-06 18:13:31.287254] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:05.824 [2024-12-06 18:13:31.287395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:05.824 [2024-12-06 18:13:31.287826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.824 
18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.824 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.083 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.083 "name": "raid_bdev1", 00:15:06.083 "uuid": "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8", 00:15:06.083 "strip_size_kb": 0, 00:15:06.083 "state": "online", 00:15:06.083 "raid_level": "raid1", 00:15:06.083 "superblock": true, 00:15:06.083 "num_base_bdevs": 4, 00:15:06.083 "num_base_bdevs_discovered": 4, 00:15:06.083 "num_base_bdevs_operational": 4, 00:15:06.083 "base_bdevs_list": [ 00:15:06.083 { 00:15:06.083 "name": "pt1", 00:15:06.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.083 "is_configured": true, 00:15:06.083 "data_offset": 2048, 00:15:06.083 "data_size": 63488 00:15:06.083 }, 00:15:06.083 { 00:15:06.083 "name": "pt2", 00:15:06.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.083 "is_configured": true, 00:15:06.083 "data_offset": 2048, 00:15:06.083 "data_size": 63488 00:15:06.083 }, 00:15:06.083 { 00:15:06.083 "name": "pt3", 00:15:06.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.083 "is_configured": true, 00:15:06.083 "data_offset": 2048, 00:15:06.083 "data_size": 63488 
00:15:06.083 }, 00:15:06.083 { 00:15:06.083 "name": "pt4", 00:15:06.083 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:06.083 "is_configured": true, 00:15:06.083 "data_offset": 2048, 00:15:06.083 "data_size": 63488 00:15:06.083 } 00:15:06.083 ] 00:15:06.083 }' 00:15:06.083 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.083 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.341 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:06.341 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:06.341 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:06.341 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:06.341 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:06.341 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:06.341 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:06.341 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.341 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.341 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:06.341 [2024-12-06 18:13:31.800287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.341 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.341 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:06.341 "name": "raid_bdev1", 00:15:06.341 "aliases": [ 00:15:06.341 "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8" 00:15:06.341 ], 
00:15:06.341 "product_name": "Raid Volume", 00:15:06.341 "block_size": 512, 00:15:06.341 "num_blocks": 63488, 00:15:06.341 "uuid": "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8", 00:15:06.341 "assigned_rate_limits": { 00:15:06.341 "rw_ios_per_sec": 0, 00:15:06.341 "rw_mbytes_per_sec": 0, 00:15:06.341 "r_mbytes_per_sec": 0, 00:15:06.341 "w_mbytes_per_sec": 0 00:15:06.341 }, 00:15:06.341 "claimed": false, 00:15:06.341 "zoned": false, 00:15:06.341 "supported_io_types": { 00:15:06.341 "read": true, 00:15:06.341 "write": true, 00:15:06.341 "unmap": false, 00:15:06.341 "flush": false, 00:15:06.341 "reset": true, 00:15:06.341 "nvme_admin": false, 00:15:06.341 "nvme_io": false, 00:15:06.341 "nvme_io_md": false, 00:15:06.341 "write_zeroes": true, 00:15:06.342 "zcopy": false, 00:15:06.342 "get_zone_info": false, 00:15:06.342 "zone_management": false, 00:15:06.342 "zone_append": false, 00:15:06.342 "compare": false, 00:15:06.342 "compare_and_write": false, 00:15:06.342 "abort": false, 00:15:06.342 "seek_hole": false, 00:15:06.342 "seek_data": false, 00:15:06.342 "copy": false, 00:15:06.342 "nvme_iov_md": false 00:15:06.342 }, 00:15:06.342 "memory_domains": [ 00:15:06.342 { 00:15:06.342 "dma_device_id": "system", 00:15:06.342 "dma_device_type": 1 00:15:06.342 }, 00:15:06.342 { 00:15:06.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.342 "dma_device_type": 2 00:15:06.342 }, 00:15:06.342 { 00:15:06.342 "dma_device_id": "system", 00:15:06.342 "dma_device_type": 1 00:15:06.342 }, 00:15:06.342 { 00:15:06.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.342 "dma_device_type": 2 00:15:06.342 }, 00:15:06.342 { 00:15:06.342 "dma_device_id": "system", 00:15:06.342 "dma_device_type": 1 00:15:06.342 }, 00:15:06.342 { 00:15:06.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.342 "dma_device_type": 2 00:15:06.342 }, 00:15:06.342 { 00:15:06.342 "dma_device_id": "system", 00:15:06.342 "dma_device_type": 1 00:15:06.342 }, 00:15:06.342 { 00:15:06.342 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:06.342 "dma_device_type": 2 00:15:06.342 } 00:15:06.342 ], 00:15:06.342 "driver_specific": { 00:15:06.342 "raid": { 00:15:06.342 "uuid": "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8", 00:15:06.342 "strip_size_kb": 0, 00:15:06.342 "state": "online", 00:15:06.342 "raid_level": "raid1", 00:15:06.342 "superblock": true, 00:15:06.342 "num_base_bdevs": 4, 00:15:06.342 "num_base_bdevs_discovered": 4, 00:15:06.342 "num_base_bdevs_operational": 4, 00:15:06.342 "base_bdevs_list": [ 00:15:06.342 { 00:15:06.342 "name": "pt1", 00:15:06.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.342 "is_configured": true, 00:15:06.342 "data_offset": 2048, 00:15:06.342 "data_size": 63488 00:15:06.342 }, 00:15:06.342 { 00:15:06.342 "name": "pt2", 00:15:06.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.342 "is_configured": true, 00:15:06.342 "data_offset": 2048, 00:15:06.342 "data_size": 63488 00:15:06.342 }, 00:15:06.342 { 00:15:06.342 "name": "pt3", 00:15:06.342 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.342 "is_configured": true, 00:15:06.342 "data_offset": 2048, 00:15:06.342 "data_size": 63488 00:15:06.342 }, 00:15:06.342 { 00:15:06.342 "name": "pt4", 00:15:06.342 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:06.342 "is_configured": true, 00:15:06.342 "data_offset": 2048, 00:15:06.342 "data_size": 63488 00:15:06.342 } 00:15:06.342 ] 00:15:06.342 } 00:15:06.342 } 00:15:06.342 }' 00:15:06.342 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:06.600 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:06.600 pt2 00:15:06.600 pt3 00:15:06.600 pt4' 00:15:06.600 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.601 18:13:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:06.601 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.601 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:06.601 18:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.601 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.601 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.601 18:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.601 18:13:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.601 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.859 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:06.859 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.859 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.859 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.859 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.859 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.860 [2024-12-06 18:13:32.176343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a4e5dbfa-8216-41f0-81c7-a6a73d6340f8 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a4e5dbfa-8216-41f0-81c7-a6a73d6340f8 ']' 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.860 [2024-12-06 18:13:32.223999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.860 [2024-12-06 18:13:32.224034] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.860 [2024-12-06 18:13:32.224152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.860 [2024-12-06 18:13:32.224264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.860 [2024-12-06 18:13:32.224289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.860 18:13:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.860 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.119 [2024-12-06 18:13:32.384043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:07.119 [2024-12-06 18:13:32.386659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:07.119 [2024-12-06 18:13:32.386880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:07.119 [2024-12-06 18:13:32.387067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:07.119 [2024-12-06 18:13:32.387261] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:07.119 [2024-12-06 18:13:32.387467] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:07.119 [2024-12-06 18:13:32.387656] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:07.119 [2024-12-06 18:13:32.387860] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:07.119 [2024-12-06 18:13:32.388040] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.119 [2024-12-06 18:13:32.388173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:15:07.119 request: 00:15:07.119 { 00:15:07.119 "name": "raid_bdev1", 00:15:07.119 "raid_level": "raid1", 00:15:07.119 "base_bdevs": [ 00:15:07.119 "malloc1", 00:15:07.119 "malloc2", 00:15:07.119 "malloc3", 00:15:07.119 "malloc4" 00:15:07.119 ], 00:15:07.119 "superblock": false, 00:15:07.119 "method": "bdev_raid_create", 00:15:07.119 "req_id": 1 00:15:07.119 } 00:15:07.119 Got JSON-RPC error response 00:15:07.119 response: 00:15:07.119 { 00:15:07.119 "code": -17, 00:15:07.119 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:07.119 } 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:07.119 
18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.119 [2024-12-06 18:13:32.448542] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:07.119 [2024-12-06 18:13:32.448755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.119 [2024-12-06 18:13:32.448839] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:07.119 [2024-12-06 18:13:32.448963] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.119 [2024-12-06 18:13:32.451866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.119 [2024-12-06 18:13:32.452044] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:07.119 [2024-12-06 18:13:32.452275] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:07.119 [2024-12-06 18:13:32.452473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:07.119 pt1 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.119 18:13:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.119 "name": "raid_bdev1", 00:15:07.119 "uuid": "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8", 00:15:07.119 "strip_size_kb": 0, 00:15:07.119 "state": "configuring", 00:15:07.119 "raid_level": "raid1", 00:15:07.119 "superblock": true, 00:15:07.119 "num_base_bdevs": 4, 00:15:07.119 "num_base_bdevs_discovered": 1, 00:15:07.119 "num_base_bdevs_operational": 4, 00:15:07.119 "base_bdevs_list": [ 00:15:07.119 { 00:15:07.119 "name": "pt1", 00:15:07.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.119 "is_configured": true, 00:15:07.119 "data_offset": 2048, 00:15:07.119 "data_size": 63488 00:15:07.119 }, 00:15:07.119 { 00:15:07.119 "name": null, 00:15:07.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.119 "is_configured": false, 00:15:07.119 "data_offset": 2048, 00:15:07.119 "data_size": 63488 00:15:07.119 }, 00:15:07.119 { 00:15:07.119 "name": null, 00:15:07.119 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.119 
"is_configured": false, 00:15:07.119 "data_offset": 2048, 00:15:07.119 "data_size": 63488 00:15:07.119 }, 00:15:07.119 { 00:15:07.119 "name": null, 00:15:07.119 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.119 "is_configured": false, 00:15:07.119 "data_offset": 2048, 00:15:07.119 "data_size": 63488 00:15:07.119 } 00:15:07.119 ] 00:15:07.119 }' 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.119 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.687 [2024-12-06 18:13:32.929032] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:07.687 [2024-12-06 18:13:32.929123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.687 [2024-12-06 18:13:32.929157] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:07.687 [2024-12-06 18:13:32.929199] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.687 [2024-12-06 18:13:32.929761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.687 [2024-12-06 18:13:32.929823] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:07.687 [2024-12-06 18:13:32.929930] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:07.687 [2024-12-06 18:13:32.929970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:15:07.687 pt2 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.687 [2024-12-06 18:13:32.936998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.687 18:13:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.687 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.687 "name": "raid_bdev1", 00:15:07.687 "uuid": "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8", 00:15:07.687 "strip_size_kb": 0, 00:15:07.687 "state": "configuring", 00:15:07.687 "raid_level": "raid1", 00:15:07.687 "superblock": true, 00:15:07.687 "num_base_bdevs": 4, 00:15:07.687 "num_base_bdevs_discovered": 1, 00:15:07.687 "num_base_bdevs_operational": 4, 00:15:07.687 "base_bdevs_list": [ 00:15:07.687 { 00:15:07.687 "name": "pt1", 00:15:07.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.687 "is_configured": true, 00:15:07.687 "data_offset": 2048, 00:15:07.687 "data_size": 63488 00:15:07.687 }, 00:15:07.687 { 00:15:07.687 "name": null, 00:15:07.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.687 "is_configured": false, 00:15:07.687 "data_offset": 0, 00:15:07.687 "data_size": 63488 00:15:07.687 }, 00:15:07.687 { 00:15:07.687 "name": null, 00:15:07.687 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.687 "is_configured": false, 00:15:07.687 "data_offset": 2048, 00:15:07.687 "data_size": 63488 00:15:07.687 }, 00:15:07.687 { 00:15:07.687 "name": null, 00:15:07.687 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.687 "is_configured": false, 00:15:07.687 "data_offset": 2048, 00:15:07.687 "data_size": 63488 00:15:07.688 } 00:15:07.688 ] 00:15:07.688 }' 00:15:07.688 18:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.688 18:13:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.946 [2024-12-06 18:13:33.397224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:07.946 [2024-12-06 18:13:33.397314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.946 [2024-12-06 18:13:33.397346] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:07.946 [2024-12-06 18:13:33.397359] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.946 [2024-12-06 18:13:33.397938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.946 [2024-12-06 18:13:33.397970] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:07.946 [2024-12-06 18:13:33.398074] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:07.946 [2024-12-06 18:13:33.398106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:07.946 pt2 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:07.946 18:13:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.946 [2024-12-06 18:13:33.409143] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:07.946 [2024-12-06 18:13:33.409407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.946 [2024-12-06 18:13:33.409445] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:07.946 [2024-12-06 18:13:33.409459] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.946 [2024-12-06 18:13:33.409956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.946 [2024-12-06 18:13:33.409991] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:07.946 [2024-12-06 18:13:33.410076] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:07.946 [2024-12-06 18:13:33.410104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:07.946 pt3 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.946 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.946 [2024-12-06 18:13:33.421117] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:07.946 [2024-12-06 
18:13:33.421199] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.946 [2024-12-06 18:13:33.421225] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:07.946 [2024-12-06 18:13:33.421238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.946 [2024-12-06 18:13:33.421689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.946 [2024-12-06 18:13:33.421718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:07.946 [2024-12-06 18:13:33.421841] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:07.946 [2024-12-06 18:13:33.421876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:07.946 [2024-12-06 18:13:33.422069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:07.946 [2024-12-06 18:13:33.422105] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:07.946 [2024-12-06 18:13:33.422418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:07.946 [2024-12-06 18:13:33.422652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:07.946 [2024-12-06 18:13:33.422674] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:07.946 [2024-12-06 18:13:33.422856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.946 pt4 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.947 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.205 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.205 "name": "raid_bdev1", 00:15:08.205 "uuid": "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8", 00:15:08.205 "strip_size_kb": 0, 00:15:08.205 "state": "online", 00:15:08.205 "raid_level": "raid1", 00:15:08.205 "superblock": true, 00:15:08.205 "num_base_bdevs": 4, 00:15:08.205 
"num_base_bdevs_discovered": 4, 00:15:08.205 "num_base_bdevs_operational": 4, 00:15:08.205 "base_bdevs_list": [ 00:15:08.205 { 00:15:08.205 "name": "pt1", 00:15:08.205 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.205 "is_configured": true, 00:15:08.205 "data_offset": 2048, 00:15:08.205 "data_size": 63488 00:15:08.205 }, 00:15:08.205 { 00:15:08.205 "name": "pt2", 00:15:08.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.205 "is_configured": true, 00:15:08.205 "data_offset": 2048, 00:15:08.205 "data_size": 63488 00:15:08.205 }, 00:15:08.205 { 00:15:08.205 "name": "pt3", 00:15:08.205 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.205 "is_configured": true, 00:15:08.205 "data_offset": 2048, 00:15:08.205 "data_size": 63488 00:15:08.205 }, 00:15:08.205 { 00:15:08.205 "name": "pt4", 00:15:08.205 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:08.205 "is_configured": true, 00:15:08.205 "data_offset": 2048, 00:15:08.205 "data_size": 63488 00:15:08.205 } 00:15:08.205 ] 00:15:08.205 }' 00:15:08.205 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.205 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.464 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:08.464 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:08.464 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:08.464 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:08.464 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:08.464 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:08.464 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:08.464 18:13:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:08.464 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.464 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.464 [2024-12-06 18:13:33.869743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.464 18:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.464 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:08.464 "name": "raid_bdev1", 00:15:08.464 "aliases": [ 00:15:08.464 "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8" 00:15:08.464 ], 00:15:08.464 "product_name": "Raid Volume", 00:15:08.464 "block_size": 512, 00:15:08.464 "num_blocks": 63488, 00:15:08.464 "uuid": "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8", 00:15:08.464 "assigned_rate_limits": { 00:15:08.464 "rw_ios_per_sec": 0, 00:15:08.464 "rw_mbytes_per_sec": 0, 00:15:08.464 "r_mbytes_per_sec": 0, 00:15:08.464 "w_mbytes_per_sec": 0 00:15:08.464 }, 00:15:08.464 "claimed": false, 00:15:08.464 "zoned": false, 00:15:08.464 "supported_io_types": { 00:15:08.464 "read": true, 00:15:08.464 "write": true, 00:15:08.464 "unmap": false, 00:15:08.464 "flush": false, 00:15:08.464 "reset": true, 00:15:08.464 "nvme_admin": false, 00:15:08.464 "nvme_io": false, 00:15:08.464 "nvme_io_md": false, 00:15:08.464 "write_zeroes": true, 00:15:08.464 "zcopy": false, 00:15:08.464 "get_zone_info": false, 00:15:08.464 "zone_management": false, 00:15:08.464 "zone_append": false, 00:15:08.464 "compare": false, 00:15:08.464 "compare_and_write": false, 00:15:08.464 "abort": false, 00:15:08.464 "seek_hole": false, 00:15:08.464 "seek_data": false, 00:15:08.464 "copy": false, 00:15:08.464 "nvme_iov_md": false 00:15:08.464 }, 00:15:08.464 "memory_domains": [ 00:15:08.464 { 00:15:08.464 "dma_device_id": "system", 00:15:08.464 
"dma_device_type": 1 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.464 "dma_device_type": 2 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "dma_device_id": "system", 00:15:08.464 "dma_device_type": 1 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.464 "dma_device_type": 2 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "dma_device_id": "system", 00:15:08.464 "dma_device_type": 1 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.464 "dma_device_type": 2 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "dma_device_id": "system", 00:15:08.464 "dma_device_type": 1 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.464 "dma_device_type": 2 00:15:08.464 } 00:15:08.464 ], 00:15:08.464 "driver_specific": { 00:15:08.464 "raid": { 00:15:08.464 "uuid": "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8", 00:15:08.464 "strip_size_kb": 0, 00:15:08.464 "state": "online", 00:15:08.464 "raid_level": "raid1", 00:15:08.464 "superblock": true, 00:15:08.464 "num_base_bdevs": 4, 00:15:08.464 "num_base_bdevs_discovered": 4, 00:15:08.464 "num_base_bdevs_operational": 4, 00:15:08.464 "base_bdevs_list": [ 00:15:08.464 { 00:15:08.464 "name": "pt1", 00:15:08.464 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.464 "is_configured": true, 00:15:08.464 "data_offset": 2048, 00:15:08.464 "data_size": 63488 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "name": "pt2", 00:15:08.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.464 "is_configured": true, 00:15:08.464 "data_offset": 2048, 00:15:08.464 "data_size": 63488 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "name": "pt3", 00:15:08.464 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.464 "is_configured": true, 00:15:08.464 "data_offset": 2048, 00:15:08.464 "data_size": 63488 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "name": "pt4", 00:15:08.464 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:15:08.464 "is_configured": true, 00:15:08.464 "data_offset": 2048, 00:15:08.464 "data_size": 63488 00:15:08.464 } 00:15:08.464 ] 00:15:08.464 } 00:15:08.464 } 00:15:08.464 }' 00:15:08.464 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:08.464 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:08.464 pt2 00:15:08.464 pt3 00:15:08.464 pt4' 00:15:08.464 18:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.723 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.724 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.724 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.724 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.724 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.724 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:08.724 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.724 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.724 18:13:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.724 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.724 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.724 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.724 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:08.724 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.724 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:08.724 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.724 [2024-12-06 18:13:34.233828] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a4e5dbfa-8216-41f0-81c7-a6a73d6340f8 '!=' a4e5dbfa-8216-41f0-81c7-a6a73d6340f8 ']' 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.982 [2024-12-06 18:13:34.277557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:08.982 
18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.982 "name": "raid_bdev1", 00:15:08.982 "uuid": "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8", 00:15:08.982 "strip_size_kb": 0, 00:15:08.982 "state": 
"online", 00:15:08.982 "raid_level": "raid1", 00:15:08.982 "superblock": true, 00:15:08.982 "num_base_bdevs": 4, 00:15:08.982 "num_base_bdevs_discovered": 3, 00:15:08.982 "num_base_bdevs_operational": 3, 00:15:08.982 "base_bdevs_list": [ 00:15:08.982 { 00:15:08.982 "name": null, 00:15:08.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.982 "is_configured": false, 00:15:08.982 "data_offset": 0, 00:15:08.982 "data_size": 63488 00:15:08.982 }, 00:15:08.982 { 00:15:08.982 "name": "pt2", 00:15:08.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.982 "is_configured": true, 00:15:08.982 "data_offset": 2048, 00:15:08.982 "data_size": 63488 00:15:08.982 }, 00:15:08.982 { 00:15:08.982 "name": "pt3", 00:15:08.982 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.982 "is_configured": true, 00:15:08.982 "data_offset": 2048, 00:15:08.982 "data_size": 63488 00:15:08.982 }, 00:15:08.982 { 00:15:08.982 "name": "pt4", 00:15:08.982 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:08.982 "is_configured": true, 00:15:08.982 "data_offset": 2048, 00:15:08.982 "data_size": 63488 00:15:08.982 } 00:15:08.982 ] 00:15:08.982 }' 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.982 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.550 [2024-12-06 18:13:34.785647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.550 [2024-12-06 18:13:34.785683] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.550 [2024-12-06 18:13:34.785776] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.550 [2024-12-06 18:13:34.785930] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.550 [2024-12-06 18:13:34.785949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.550 [2024-12-06 18:13:34.873641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.550 [2024-12-06 
18:13:34.873703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.550 [2024-12-06 18:13:34.873732] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:09.550 [2024-12-06 18:13:34.873747] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.550 [2024-12-06 18:13:34.876580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.550 [2024-12-06 18:13:34.876623] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.550 [2024-12-06 18:13:34.876728] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:09.550 [2024-12-06 18:13:34.876813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.550 pt2 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.550 18:13:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.550 "name": "raid_bdev1", 00:15:09.550 "uuid": "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8", 00:15:09.550 "strip_size_kb": 0, 00:15:09.550 "state": "configuring", 00:15:09.550 "raid_level": "raid1", 00:15:09.550 "superblock": true, 00:15:09.550 "num_base_bdevs": 4, 00:15:09.550 "num_base_bdevs_discovered": 1, 00:15:09.550 "num_base_bdevs_operational": 3, 00:15:09.550 "base_bdevs_list": [ 00:15:09.550 { 00:15:09.550 "name": null, 00:15:09.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.550 "is_configured": false, 00:15:09.550 "data_offset": 2048, 00:15:09.550 "data_size": 63488 00:15:09.550 }, 00:15:09.550 { 00:15:09.550 "name": "pt2", 00:15:09.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.550 "is_configured": true, 00:15:09.550 "data_offset": 2048, 00:15:09.550 "data_size": 63488 00:15:09.550 }, 00:15:09.550 { 00:15:09.550 "name": null, 00:15:09.550 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.550 "is_configured": false, 00:15:09.550 "data_offset": 2048, 00:15:09.550 "data_size": 63488 00:15:09.550 }, 00:15:09.550 { 00:15:09.550 "name": null, 00:15:09.550 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:09.550 "is_configured": false, 00:15:09.550 "data_offset": 2048, 00:15:09.550 "data_size": 63488 00:15:09.550 
} 00:15:09.550 ] 00:15:09.550 }' 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.550 18:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.201 [2024-12-06 18:13:35.361842] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:10.201 [2024-12-06 18:13:35.361943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.201 [2024-12-06 18:13:35.361978] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:10.201 [2024-12-06 18:13:35.361993] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.201 [2024-12-06 18:13:35.362622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.201 [2024-12-06 18:13:35.362650] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:10.201 [2024-12-06 18:13:35.362756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:10.201 [2024-12-06 18:13:35.362807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:10.201 pt3 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.201 "name": "raid_bdev1", 00:15:10.201 "uuid": "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8", 00:15:10.201 "strip_size_kb": 0, 00:15:10.201 "state": "configuring", 00:15:10.201 "raid_level": "raid1", 00:15:10.201 "superblock": true, 00:15:10.201 "num_base_bdevs": 4, 00:15:10.201 "num_base_bdevs_discovered": 2, 
00:15:10.201 "num_base_bdevs_operational": 3, 00:15:10.201 "base_bdevs_list": [ 00:15:10.201 { 00:15:10.201 "name": null, 00:15:10.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.201 "is_configured": false, 00:15:10.201 "data_offset": 2048, 00:15:10.201 "data_size": 63488 00:15:10.201 }, 00:15:10.201 { 00:15:10.201 "name": "pt2", 00:15:10.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.201 "is_configured": true, 00:15:10.201 "data_offset": 2048, 00:15:10.201 "data_size": 63488 00:15:10.201 }, 00:15:10.201 { 00:15:10.201 "name": "pt3", 00:15:10.201 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.201 "is_configured": true, 00:15:10.201 "data_offset": 2048, 00:15:10.201 "data_size": 63488 00:15:10.201 }, 00:15:10.201 { 00:15:10.201 "name": null, 00:15:10.201 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.201 "is_configured": false, 00:15:10.201 "data_offset": 2048, 00:15:10.201 "data_size": 63488 00:15:10.201 } 00:15:10.201 ] 00:15:10.201 }' 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.201 18:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.459 [2024-12-06 18:13:35.882075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:10.459 [2024-12-06 
18:13:35.882206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.459 [2024-12-06 18:13:35.882243] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:10.459 [2024-12-06 18:13:35.882258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.459 [2024-12-06 18:13:35.882819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.459 [2024-12-06 18:13:35.882846] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:10.459 [2024-12-06 18:13:35.882949] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:10.459 [2024-12-06 18:13:35.882981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:10.459 [2024-12-06 18:13:35.883189] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:10.459 [2024-12-06 18:13:35.883204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:10.459 [2024-12-06 18:13:35.883510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:10.459 [2024-12-06 18:13:35.883746] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:10.459 [2024-12-06 18:13:35.883790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:10.459 [2024-12-06 18:13:35.883993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.459 pt4 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.459 18:13:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.459 "name": "raid_bdev1", 00:15:10.459 "uuid": "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8", 00:15:10.459 "strip_size_kb": 0, 00:15:10.459 "state": "online", 00:15:10.459 "raid_level": "raid1", 00:15:10.459 "superblock": true, 00:15:10.459 "num_base_bdevs": 4, 00:15:10.459 "num_base_bdevs_discovered": 3, 00:15:10.459 "num_base_bdevs_operational": 3, 00:15:10.459 "base_bdevs_list": [ 00:15:10.459 { 00:15:10.459 "name": null, 00:15:10.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.459 
"is_configured": false, 00:15:10.459 "data_offset": 2048, 00:15:10.459 "data_size": 63488 00:15:10.459 }, 00:15:10.459 { 00:15:10.459 "name": "pt2", 00:15:10.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.459 "is_configured": true, 00:15:10.459 "data_offset": 2048, 00:15:10.459 "data_size": 63488 00:15:10.459 }, 00:15:10.459 { 00:15:10.459 "name": "pt3", 00:15:10.459 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.459 "is_configured": true, 00:15:10.459 "data_offset": 2048, 00:15:10.459 "data_size": 63488 00:15:10.459 }, 00:15:10.459 { 00:15:10.459 "name": "pt4", 00:15:10.459 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.459 "is_configured": true, 00:15:10.459 "data_offset": 2048, 00:15:10.459 "data_size": 63488 00:15:10.459 } 00:15:10.459 ] 00:15:10.459 }' 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.459 18:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.024 [2024-12-06 18:13:36.430276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.024 [2024-12-06 18:13:36.430459] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.024 [2024-12-06 18:13:36.430689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.024 [2024-12-06 18:13:36.430929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.024 [2024-12-06 18:13:36.431088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.024 [2024-12-06 18:13:36.494331] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:11.024 [2024-12-06 18:13:36.494417] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:11.024 [2024-12-06 18:13:36.494443] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:11.024 [2024-12-06 18:13:36.494462] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.024 [2024-12-06 18:13:36.497571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.024 [2024-12-06 18:13:36.497635] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:11.024 [2024-12-06 18:13:36.497731] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:11.024 [2024-12-06 18:13:36.497839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:11.024 [2024-12-06 18:13:36.498026] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:11.024 [2024-12-06 18:13:36.498051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.024 [2024-12-06 18:13:36.498072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:11.024 [2024-12-06 18:13:36.498178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:11.024 [2024-12-06 18:13:36.498371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:11.024 pt1 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.024 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.283 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.283 "name": "raid_bdev1", 00:15:11.283 "uuid": "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8", 00:15:11.283 "strip_size_kb": 0, 00:15:11.283 "state": "configuring", 00:15:11.283 "raid_level": "raid1", 00:15:11.283 "superblock": true, 00:15:11.283 "num_base_bdevs": 4, 00:15:11.283 "num_base_bdevs_discovered": 2, 00:15:11.283 "num_base_bdevs_operational": 3, 00:15:11.283 "base_bdevs_list": [ 00:15:11.283 { 00:15:11.283 "name": null, 00:15:11.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.283 "is_configured": false, 00:15:11.283 
"data_offset": 2048, 00:15:11.283 "data_size": 63488 00:15:11.283 }, 00:15:11.283 { 00:15:11.283 "name": "pt2", 00:15:11.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.283 "is_configured": true, 00:15:11.283 "data_offset": 2048, 00:15:11.283 "data_size": 63488 00:15:11.283 }, 00:15:11.283 { 00:15:11.283 "name": "pt3", 00:15:11.283 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.283 "is_configured": true, 00:15:11.283 "data_offset": 2048, 00:15:11.283 "data_size": 63488 00:15:11.283 }, 00:15:11.283 { 00:15:11.283 "name": null, 00:15:11.283 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:11.283 "is_configured": false, 00:15:11.283 "data_offset": 2048, 00:15:11.283 "data_size": 63488 00:15:11.283 } 00:15:11.283 ] 00:15:11.283 }' 00:15:11.283 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.283 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.541 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:11.541 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.541 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.541 18:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:11.541 18:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.541 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:11.541 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:11.541 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.541 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:11.541 [2024-12-06 18:13:37.038588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:11.541 [2024-12-06 18:13:37.038689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.541 [2024-12-06 18:13:37.038725] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:11.541 [2024-12-06 18:13:37.038741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.541 [2024-12-06 18:13:37.039313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.541 [2024-12-06 18:13:37.039351] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:11.541 [2024-12-06 18:13:37.039452] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:11.541 [2024-12-06 18:13:37.039532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:11.542 [2024-12-06 18:13:37.039700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:11.542 [2024-12-06 18:13:37.039723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:11.542 [2024-12-06 18:13:37.040062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:11.542 [2024-12-06 18:13:37.040282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:11.542 [2024-12-06 18:13:37.040333] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:11.542 [2024-12-06 18:13:37.040491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.542 pt4 00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.542 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.800 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.800 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.800 "name": "raid_bdev1", 00:15:11.800 "uuid": "a4e5dbfa-8216-41f0-81c7-a6a73d6340f8", 00:15:11.800 "strip_size_kb": 0, 00:15:11.800 "state": "online", 00:15:11.800 "raid_level": "raid1", 00:15:11.800 "superblock": true, 00:15:11.800 "num_base_bdevs": 4, 00:15:11.800 "num_base_bdevs_discovered": 3, 00:15:11.800 "num_base_bdevs_operational": 3, 00:15:11.800 
"base_bdevs_list": [ 00:15:11.800 { 00:15:11.800 "name": null, 00:15:11.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.800 "is_configured": false, 00:15:11.800 "data_offset": 2048, 00:15:11.800 "data_size": 63488 00:15:11.800 }, 00:15:11.800 { 00:15:11.800 "name": "pt2", 00:15:11.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.800 "is_configured": true, 00:15:11.800 "data_offset": 2048, 00:15:11.800 "data_size": 63488 00:15:11.800 }, 00:15:11.800 { 00:15:11.800 "name": "pt3", 00:15:11.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.800 "is_configured": true, 00:15:11.800 "data_offset": 2048, 00:15:11.800 "data_size": 63488 00:15:11.800 }, 00:15:11.800 { 00:15:11.800 "name": "pt4", 00:15:11.800 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:11.800 "is_configured": true, 00:15:11.800 "data_offset": 2048, 00:15:11.800 "data_size": 63488 00:15:11.800 } 00:15:11.800 ] 00:15:11.800 }' 00:15:11.800 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.800 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.059 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:12.059 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:12.059 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.059 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.059 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.059 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.318 [2024-12-06 18:13:37.583138] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a4e5dbfa-8216-41f0-81c7-a6a73d6340f8 '!=' a4e5dbfa-8216-41f0-81c7-a6a73d6340f8 ']' 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74742 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74742 ']' 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74742 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74742 00:15:12.318 killing process with pid 74742 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74742' 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74742 00:15:12.318 [2024-12-06 18:13:37.657886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.318 18:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # 
wait 74742 00:15:12.318 [2024-12-06 18:13:37.658000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.318 [2024-12-06 18:13:37.658102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.318 [2024-12-06 18:13:37.658123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:12.577 [2024-12-06 18:13:38.007369] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:14.031 18:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:14.031 00:15:14.031 real 0m9.100s 00:15:14.031 user 0m14.853s 00:15:14.031 sys 0m1.353s 00:15:14.031 18:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.031 18:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.031 ************************************ 00:15:14.031 END TEST raid_superblock_test 00:15:14.031 ************************************ 00:15:14.031 18:13:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:15:14.031 18:13:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:14.031 18:13:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.031 18:13:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:14.031 ************************************ 00:15:14.031 START TEST raid_read_error_test 00:15:14.031 ************************************ 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # 
local error_io_type=read 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:14.031 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 
00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bUEOc3pj3g 00:15:14.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75235 00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75235 00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75235 ']' 00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.032 18:13:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.032 [2024-12-06 18:13:39.197306] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:15:14.032 [2024-12-06 18:13:39.197465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75235 ] 00:15:14.032 [2024-12-06 18:13:39.372206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.032 [2024-12-06 18:13:39.500041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.290 [2024-12-06 18:13:39.701857] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.290 [2024-12-06 18:13:39.702111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.857 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.857 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:14.857 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:14.857 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:14.857 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.857 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.857 BaseBdev1_malloc 00:15:14.857 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.857 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:14.857 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.857 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.857 true 00:15:14.857 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:14.857 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:14.857 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.857 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.857 [2024-12-06 18:13:40.254266] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:14.858 [2024-12-06 18:13:40.254476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.858 [2024-12-06 18:13:40.254518] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:14.858 [2024-12-06 18:13:40.254537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.858 [2024-12-06 18:13:40.257351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.858 [2024-12-06 18:13:40.257545] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.858 BaseBdev1 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.858 BaseBdev2_malloc 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.858 true 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.858 [2024-12-06 18:13:40.314527] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:14.858 [2024-12-06 18:13:40.314734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.858 [2024-12-06 18:13:40.314821] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:14.858 [2024-12-06 18:13:40.314939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.858 [2024-12-06 18:13:40.317758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.858 [2024-12-06 18:13:40.317940] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:14.858 BaseBdev2 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.858 BaseBdev3_malloc 00:15:14.858 18:13:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.858 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.118 true 00:15:15.118 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.118 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:15.118 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.118 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.118 [2024-12-06 18:13:40.385669] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:15.118 [2024-12-06 18:13:40.385752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.118 [2024-12-06 18:13:40.385804] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:15.118 [2024-12-06 18:13:40.385838] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.118 [2024-12-06 18:13:40.388656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.118 [2024-12-06 18:13:40.388849] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:15.118 BaseBdev3 00:15:15.118 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.118 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:15.118 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:15.118 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.118 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.118 BaseBdev4_malloc 00:15:15.118 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.119 true 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.119 [2024-12-06 18:13:40.445626] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:15.119 [2024-12-06 18:13:40.445698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.119 [2024-12-06 18:13:40.445727] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:15.119 [2024-12-06 18:13:40.445746] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.119 [2024-12-06 18:13:40.448568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.119 [2024-12-06 18:13:40.448748] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:15.119 BaseBdev4 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.119 [2024-12-06 18:13:40.453699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.119 [2024-12-06 18:13:40.456161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.119 [2024-12-06 18:13:40.456265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:15.119 [2024-12-06 18:13:40.456362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:15.119 [2024-12-06 18:13:40.456691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:15.119 [2024-12-06 18:13:40.456721] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:15.119 [2024-12-06 18:13:40.457074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:15.119 [2024-12-06 18:13:40.457319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:15.119 [2024-12-06 18:13:40.457342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:15.119 [2024-12-06 18:13:40.457592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:15.119 18:13:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.119 "name": "raid_bdev1", 00:15:15.119 "uuid": "10d4dc10-b71e-46ef-9445-2cbd74965458", 00:15:15.119 "strip_size_kb": 0, 00:15:15.119 "state": "online", 00:15:15.119 "raid_level": "raid1", 00:15:15.119 "superblock": true, 00:15:15.119 "num_base_bdevs": 4, 00:15:15.119 "num_base_bdevs_discovered": 4, 00:15:15.119 "num_base_bdevs_operational": 4, 00:15:15.119 "base_bdevs_list": [ 00:15:15.119 { 
00:15:15.119 "name": "BaseBdev1", 00:15:15.119 "uuid": "a14eb710-3a5c-5d08-9992-2c767072d3ab", 00:15:15.119 "is_configured": true, 00:15:15.119 "data_offset": 2048, 00:15:15.119 "data_size": 63488 00:15:15.119 }, 00:15:15.119 { 00:15:15.119 "name": "BaseBdev2", 00:15:15.119 "uuid": "e1335cd4-c44e-5b78-bf2c-769465fb28f5", 00:15:15.119 "is_configured": true, 00:15:15.119 "data_offset": 2048, 00:15:15.119 "data_size": 63488 00:15:15.119 }, 00:15:15.119 { 00:15:15.119 "name": "BaseBdev3", 00:15:15.119 "uuid": "503bd7db-4141-578a-9d53-7c4ccfe5bef9", 00:15:15.119 "is_configured": true, 00:15:15.119 "data_offset": 2048, 00:15:15.119 "data_size": 63488 00:15:15.119 }, 00:15:15.119 { 00:15:15.119 "name": "BaseBdev4", 00:15:15.119 "uuid": "446fe1d6-06c7-5983-ac70-d3a048cea7de", 00:15:15.119 "is_configured": true, 00:15:15.119 "data_offset": 2048, 00:15:15.119 "data_size": 63488 00:15:15.119 } 00:15:15.119 ] 00:15:15.119 }' 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.119 18:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.686 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:15.686 18:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:15.686 [2024-12-06 18:13:41.059263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.625 18:13:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.625 18:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.625 18:13:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.625 18:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.625 "name": "raid_bdev1", 00:15:16.625 "uuid": "10d4dc10-b71e-46ef-9445-2cbd74965458", 00:15:16.625 "strip_size_kb": 0, 00:15:16.625 "state": "online", 00:15:16.625 "raid_level": "raid1", 00:15:16.625 "superblock": true, 00:15:16.625 "num_base_bdevs": 4, 00:15:16.625 "num_base_bdevs_discovered": 4, 00:15:16.625 "num_base_bdevs_operational": 4, 00:15:16.625 "base_bdevs_list": [ 00:15:16.625 { 00:15:16.625 "name": "BaseBdev1", 00:15:16.625 "uuid": "a14eb710-3a5c-5d08-9992-2c767072d3ab", 00:15:16.625 "is_configured": true, 00:15:16.625 "data_offset": 2048, 00:15:16.625 "data_size": 63488 00:15:16.625 }, 00:15:16.625 { 00:15:16.625 "name": "BaseBdev2", 00:15:16.625 "uuid": "e1335cd4-c44e-5b78-bf2c-769465fb28f5", 00:15:16.625 "is_configured": true, 00:15:16.625 "data_offset": 2048, 00:15:16.625 "data_size": 63488 00:15:16.625 }, 00:15:16.625 { 00:15:16.625 "name": "BaseBdev3", 00:15:16.625 "uuid": "503bd7db-4141-578a-9d53-7c4ccfe5bef9", 00:15:16.625 "is_configured": true, 00:15:16.625 "data_offset": 2048, 00:15:16.625 "data_size": 63488 00:15:16.625 }, 00:15:16.625 { 00:15:16.625 "name": "BaseBdev4", 00:15:16.625 "uuid": "446fe1d6-06c7-5983-ac70-d3a048cea7de", 00:15:16.625 "is_configured": true, 00:15:16.625 "data_offset": 2048, 00:15:16.625 "data_size": 63488 00:15:16.625 } 00:15:16.625 ] 00:15:16.625 }' 00:15:16.625 18:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.625 18:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.194 18:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:17.194 18:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.194 18:13:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.194 [2024-12-06 18:13:42.452824] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.194 [2024-12-06 18:13:42.452865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.194 [2024-12-06 18:13:42.456390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.194 [2024-12-06 18:13:42.456479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.194 [2024-12-06 18:13:42.456640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.194 [2024-12-06 18:13:42.456672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:17.194 18:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.194 { 00:15:17.194 "results": [ 00:15:17.194 { 00:15:17.194 "job": "raid_bdev1", 00:15:17.194 "core_mask": "0x1", 00:15:17.194 "workload": "randrw", 00:15:17.195 "percentage": 50, 00:15:17.195 "status": "finished", 00:15:17.195 "queue_depth": 1, 00:15:17.195 "io_size": 131072, 00:15:17.195 "runtime": 1.391247, 00:15:17.195 "iops": 7450.1508359047675, 00:15:17.195 "mibps": 931.2688544880959, 00:15:17.195 "io_failed": 0, 00:15:17.195 "io_timeout": 0, 00:15:17.195 "avg_latency_us": 129.81879787747226, 00:15:17.195 "min_latency_us": 44.68363636363637, 00:15:17.195 "max_latency_us": 2010.7636363636364 00:15:17.195 } 00:15:17.195 ], 00:15:17.195 "core_count": 1 00:15:17.195 } 00:15:17.195 18:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75235 00:15:17.195 18:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75235 ']' 00:15:17.195 18:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75235 00:15:17.195 18:13:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:15:17.195 18:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.195 18:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75235 00:15:17.195 18:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.195 18:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.195 killing process with pid 75235 00:15:17.195 18:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75235' 00:15:17.195 18:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75235 00:15:17.195 18:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75235 00:15:17.195 [2024-12-06 18:13:42.487661] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.454 [2024-12-06 18:13:42.776309] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.391 18:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bUEOc3pj3g 00:15:18.391 18:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:18.391 18:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:18.391 18:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:18.391 18:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:18.391 18:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:18.391 18:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:18.391 18:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:18.391 00:15:18.391 real 0m4.794s 00:15:18.391 user 0m5.902s 00:15:18.391 sys 0m0.555s 
00:15:18.391 18:13:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.391 18:13:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.391 ************************************ 00:15:18.391 END TEST raid_read_error_test 00:15:18.391 ************************************ 00:15:18.650 18:13:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:15:18.650 18:13:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:18.650 18:13:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.650 18:13:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.650 ************************************ 00:15:18.650 START TEST raid_write_error_test 00:15:18.650 ************************************ 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wB6Hq5EVLz 00:15:18.650 18:13:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75380 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75380 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75380 ']' 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.650 18:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.650 [2024-12-06 18:13:44.046168] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:15:18.650 [2024-12-06 18:13:44.046306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75380 ] 00:15:18.909 [2024-12-06 18:13:44.220379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.909 [2024-12-06 18:13:44.350346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.167 [2024-12-06 18:13:44.551456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.167 [2024-12-06 18:13:44.551516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.733 BaseBdev1_malloc 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.733 true 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.733 [2024-12-06 18:13:45.092472] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:19.733 [2024-12-06 18:13:45.092535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.733 [2024-12-06 18:13:45.092565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:19.733 [2024-12-06 18:13:45.092583] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.733 [2024-12-06 18:13:45.095322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.733 [2024-12-06 18:13:45.095369] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:19.733 BaseBdev1 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.733 BaseBdev2_malloc 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:19.733 18:13:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.733 true 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.733 [2024-12-06 18:13:45.156409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:19.733 [2024-12-06 18:13:45.156473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.733 [2024-12-06 18:13:45.156499] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:19.733 [2024-12-06 18:13:45.156517] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.733 [2024-12-06 18:13:45.159278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.733 [2024-12-06 18:13:45.159323] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:19.733 BaseBdev2 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:19.733 BaseBdev3_malloc 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.733 true 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.733 [2024-12-06 18:13:45.226493] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:19.733 [2024-12-06 18:13:45.226553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.733 [2024-12-06 18:13:45.226579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:19.733 [2024-12-06 18:13:45.226609] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.733 [2024-12-06 18:13:45.229367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.733 [2024-12-06 18:13:45.229413] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:19.733 BaseBdev3 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.733 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.991 BaseBdev4_malloc 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.991 true 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.991 [2024-12-06 18:13:45.290215] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:19.991 [2024-12-06 18:13:45.290279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.991 [2024-12-06 18:13:45.290306] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:19.991 [2024-12-06 18:13:45.290324] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.991 [2024-12-06 18:13:45.293092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.991 [2024-12-06 18:13:45.293140] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:19.991 BaseBdev4 
00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.991 [2024-12-06 18:13:45.302296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.991 [2024-12-06 18:13:45.304790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.991 [2024-12-06 18:13:45.304927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.991 [2024-12-06 18:13:45.305029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:19.991 [2024-12-06 18:13:45.305336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:19.991 [2024-12-06 18:13:45.305373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:19.991 [2024-12-06 18:13:45.305684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:19.991 [2024-12-06 18:13:45.305940] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:19.991 [2024-12-06 18:13:45.305966] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:19.991 [2024-12-06 18:13:45.306210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.991 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.991 "name": "raid_bdev1", 00:15:19.991 "uuid": "2be21394-5923-4df4-adc3-94581c619508", 00:15:19.991 "strip_size_kb": 0, 00:15:19.991 "state": "online", 00:15:19.991 "raid_level": "raid1", 00:15:19.991 "superblock": true, 00:15:19.991 "num_base_bdevs": 4, 00:15:19.991 "num_base_bdevs_discovered": 4, 00:15:19.991 
"num_base_bdevs_operational": 4, 00:15:19.991 "base_bdevs_list": [ 00:15:19.991 { 00:15:19.991 "name": "BaseBdev1", 00:15:19.991 "uuid": "a8bf1cc9-8a8a-5cbc-bdcc-6049d568b878", 00:15:19.991 "is_configured": true, 00:15:19.991 "data_offset": 2048, 00:15:19.991 "data_size": 63488 00:15:19.991 }, 00:15:19.991 { 00:15:19.991 "name": "BaseBdev2", 00:15:19.991 "uuid": "0d5f2249-85cf-5c48-a64d-3a33b6d946c9", 00:15:19.991 "is_configured": true, 00:15:19.991 "data_offset": 2048, 00:15:19.991 "data_size": 63488 00:15:19.991 }, 00:15:19.991 { 00:15:19.991 "name": "BaseBdev3", 00:15:19.991 "uuid": "ba7a6fea-501c-5b49-abf2-51c666a04cb6", 00:15:19.991 "is_configured": true, 00:15:19.991 "data_offset": 2048, 00:15:19.991 "data_size": 63488 00:15:19.991 }, 00:15:19.991 { 00:15:19.991 "name": "BaseBdev4", 00:15:19.991 "uuid": "4a2c6b2e-b3ff-5f12-b77d-8dad6fa66d9b", 00:15:19.991 "is_configured": true, 00:15:19.992 "data_offset": 2048, 00:15:19.992 "data_size": 63488 00:15:19.992 } 00:15:19.992 ] 00:15:19.992 }' 00:15:19.992 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.992 18:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.557 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:20.557 18:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:20.557 [2024-12-06 18:13:45.923893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.494 [2024-12-06 18:13:46.828575] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:21.494 [2024-12-06 18:13:46.828638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.494 [2024-12-06 18:13:46.828923] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.494 "name": "raid_bdev1", 00:15:21.494 "uuid": "2be21394-5923-4df4-adc3-94581c619508", 00:15:21.494 "strip_size_kb": 0, 00:15:21.494 "state": "online", 00:15:21.494 "raid_level": "raid1", 00:15:21.494 "superblock": true, 00:15:21.494 "num_base_bdevs": 4, 00:15:21.494 "num_base_bdevs_discovered": 3, 00:15:21.494 "num_base_bdevs_operational": 3, 00:15:21.494 "base_bdevs_list": [ 00:15:21.494 { 00:15:21.494 "name": null, 00:15:21.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.494 "is_configured": false, 00:15:21.494 "data_offset": 0, 00:15:21.494 "data_size": 63488 00:15:21.494 }, 00:15:21.494 { 00:15:21.494 "name": "BaseBdev2", 00:15:21.494 "uuid": "0d5f2249-85cf-5c48-a64d-3a33b6d946c9", 00:15:21.494 "is_configured": true, 00:15:21.494 "data_offset": 2048, 00:15:21.494 "data_size": 63488 00:15:21.494 }, 00:15:21.494 { 00:15:21.494 "name": "BaseBdev3", 00:15:21.494 "uuid": "ba7a6fea-501c-5b49-abf2-51c666a04cb6", 00:15:21.494 "is_configured": true, 00:15:21.494 "data_offset": 2048, 00:15:21.494 "data_size": 63488 00:15:21.494 }, 00:15:21.494 { 00:15:21.494 "name": "BaseBdev4", 00:15:21.494 "uuid": "4a2c6b2e-b3ff-5f12-b77d-8dad6fa66d9b", 00:15:21.494 "is_configured": true, 00:15:21.494 "data_offset": 2048, 00:15:21.494 "data_size": 63488 00:15:21.494 } 00:15:21.494 ] 
00:15:21.494 }' 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.494 18:13:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.061 [2024-12-06 18:13:47.377218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.061 [2024-12-06 18:13:47.377285] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.061 [2024-12-06 18:13:47.380965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.061 [2024-12-06 18:13:47.381032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.061 [2024-12-06 18:13:47.381235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.061 [2024-12-06 18:13:47.381258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:22.061 { 00:15:22.061 "results": [ 00:15:22.061 { 00:15:22.061 "job": "raid_bdev1", 00:15:22.061 "core_mask": "0x1", 00:15:22.061 "workload": "randrw", 00:15:22.061 "percentage": 50, 00:15:22.061 "status": "finished", 00:15:22.061 "queue_depth": 1, 00:15:22.061 "io_size": 131072, 00:15:22.061 "runtime": 1.451243, 00:15:22.061 "iops": 8006.240167911232, 00:15:22.061 "mibps": 1000.780020988904, 00:15:22.061 "io_failed": 0, 00:15:22.061 "io_timeout": 0, 00:15:22.061 "avg_latency_us": 120.40897276404634, 00:15:22.061 "min_latency_us": 42.35636363636364, 00:15:22.061 "max_latency_us": 1951.1854545454546 00:15:22.061 } 00:15:22.061 ], 00:15:22.061 "core_count": 1 
00:15:22.061 } 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75380 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75380 ']' 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75380 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75380 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.061 killing process with pid 75380 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75380' 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75380 00:15:22.061 [2024-12-06 18:13:47.419044] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.061 18:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75380 00:15:22.352 [2024-12-06 18:13:47.679350] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.322 18:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:23.322 18:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wB6Hq5EVLz 00:15:23.322 18:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:23.322 18:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:15:23.322 18:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:23.322 18:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:23.323 18:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:23.323 18:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:23.323 00:15:23.323 real 0m4.784s 00:15:23.323 user 0m5.973s 00:15:23.323 sys 0m0.541s 00:15:23.323 18:13:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.323 18:13:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.323 ************************************ 00:15:23.323 END TEST raid_write_error_test 00:15:23.323 ************************************ 00:15:23.323 18:13:48 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:15:23.323 18:13:48 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:23.323 18:13:48 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:15:23.323 18:13:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:23.323 18:13:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.323 18:13:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:23.323 ************************************ 00:15:23.323 START TEST raid_rebuild_test 00:15:23.323 ************************************ 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:23.323 
18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75524 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75524 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75524 ']' 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.323 18:13:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.582 [2024-12-06 18:13:48.870826] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:15:23.582 [2024-12-06 18:13:48.871009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75524 ] 00:15:23.582 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:23.582 Zero copy mechanism will not be used. 
00:15:23.582 [2024-12-06 18:13:49.041688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.840 [2024-12-06 18:13:49.170557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.097 [2024-12-06 18:13:49.365990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.097 [2024-12-06 18:13:49.366074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.355 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.355 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:24.355 18:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:24.355 18:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:24.355 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.355 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.613 BaseBdev1_malloc 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.614 [2024-12-06 18:13:49.915536] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:24.614 [2024-12-06 18:13:49.915621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.614 [2024-12-06 18:13:49.915663] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:24.614 [2024-12-06 18:13:49.915679] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.614 [2024-12-06 18:13:49.918467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.614 [2024-12-06 18:13:49.918528] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:24.614 BaseBdev1 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.614 BaseBdev2_malloc 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.614 [2024-12-06 18:13:49.970261] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:24.614 [2024-12-06 18:13:49.970346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.614 [2024-12-06 18:13:49.970375] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:24.614 [2024-12-06 18:13:49.970391] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.614 [2024-12-06 18:13:49.973165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.614 [2024-12-06 18:13:49.973220] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:24.614 BaseBdev2 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.614 18:13:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.614 spare_malloc 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.614 spare_delay 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.614 [2024-12-06 18:13:50.041505] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:24.614 [2024-12-06 18:13:50.041589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.614 [2024-12-06 18:13:50.041616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:24.614 [2024-12-06 18:13:50.041645] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.614 [2024-12-06 
18:13:50.044516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.614 [2024-12-06 18:13:50.044577] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:24.614 spare 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.614 [2024-12-06 18:13:50.049585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.614 [2024-12-06 18:13:50.052005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.614 [2024-12-06 18:13:50.052147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:24.614 [2024-12-06 18:13:50.052169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:24.614 [2024-12-06 18:13:50.052522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:24.614 [2024-12-06 18:13:50.052733] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:24.614 [2024-12-06 18:13:50.052762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:24.614 [2024-12-06 18:13:50.052966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:24.614 18:13:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.614 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.614 "name": "raid_bdev1", 00:15:24.614 "uuid": "309791a1-0c19-4982-9cd1-2faf4409cfd3", 00:15:24.614 "strip_size_kb": 0, 00:15:24.614 "state": "online", 00:15:24.614 "raid_level": "raid1", 00:15:24.614 "superblock": false, 00:15:24.614 "num_base_bdevs": 2, 00:15:24.614 "num_base_bdevs_discovered": 2, 00:15:24.614 "num_base_bdevs_operational": 2, 00:15:24.614 "base_bdevs_list": [ 00:15:24.614 { 00:15:24.614 "name": "BaseBdev1", 
00:15:24.614 "uuid": "c8d614e1-3480-5ee3-89e8-1fcfe639ce0a", 00:15:24.614 "is_configured": true, 00:15:24.614 "data_offset": 0, 00:15:24.614 "data_size": 65536 00:15:24.614 }, 00:15:24.614 { 00:15:24.614 "name": "BaseBdev2", 00:15:24.614 "uuid": "00c12263-9b3e-5ae6-8e8d-c85fa24093e4", 00:15:24.615 "is_configured": true, 00:15:24.615 "data_offset": 0, 00:15:24.615 "data_size": 65536 00:15:24.615 } 00:15:24.615 ] 00:15:24.615 }' 00:15:24.615 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.615 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.182 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:25.182 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:25.182 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.182 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.182 [2024-12-06 18:13:50.602187] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.182 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.182 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:25.182 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.182 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.182 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.182 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:25.182 18:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.441 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:25.441 
18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:25.441 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:25.441 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:25.441 18:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:25.441 18:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.441 18:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:25.441 18:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:25.441 18:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:25.441 18:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:25.441 18:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:25.441 18:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:25.441 18:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:25.441 18:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:25.699 [2024-12-06 18:13:50.973903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:25.699 /dev/nbd0 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.699 1+0 records in 00:15:25.699 1+0 records out 00:15:25.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425534 s, 9.6 MB/s 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:25.699 18:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:15:32.287 65536+0 records in 00:15:32.287 65536+0 records out 00:15:32.287 33554432 bytes (34 MB, 32 MiB) copied, 6.2649 s, 5.4 MB/s 00:15:32.287 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:32.287 18:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.287 18:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:32.287 18:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:32.287 18:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:32.287 18:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.287 18:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:32.287 18:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:32.287 [2024-12-06 18:13:57.604392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.288 [2024-12-06 18:13:57.617595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.288 18:13:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.288 "name": "raid_bdev1", 00:15:32.288 "uuid": "309791a1-0c19-4982-9cd1-2faf4409cfd3", 00:15:32.288 "strip_size_kb": 0, 00:15:32.288 "state": "online", 00:15:32.288 "raid_level": "raid1", 00:15:32.288 "superblock": false, 00:15:32.288 "num_base_bdevs": 2, 00:15:32.288 "num_base_bdevs_discovered": 1, 00:15:32.288 "num_base_bdevs_operational": 1, 00:15:32.288 "base_bdevs_list": [ 00:15:32.288 { 00:15:32.288 "name": null, 00:15:32.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.288 "is_configured": false, 00:15:32.288 "data_offset": 0, 00:15:32.288 "data_size": 65536 00:15:32.288 }, 00:15:32.288 { 00:15:32.288 "name": "BaseBdev2", 00:15:32.288 "uuid": "00c12263-9b3e-5ae6-8e8d-c85fa24093e4", 00:15:32.288 "is_configured": true, 00:15:32.288 "data_offset": 0, 00:15:32.288 "data_size": 65536 00:15:32.288 } 00:15:32.288 ] 00:15:32.288 }' 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.288 18:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.856 18:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:32.856 18:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.856 18:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.856 [2024-12-06 18:13:58.097760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:32.856 [2024-12-06 18:13:58.114670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:15:32.856 18:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.856 18:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:32.856 [2024-12-06 18:13:58.119064] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.791 "name": "raid_bdev1", 00:15:33.791 "uuid": "309791a1-0c19-4982-9cd1-2faf4409cfd3", 00:15:33.791 "strip_size_kb": 0, 00:15:33.791 "state": "online", 00:15:33.791 "raid_level": "raid1", 00:15:33.791 "superblock": false, 00:15:33.791 "num_base_bdevs": 2, 00:15:33.791 "num_base_bdevs_discovered": 2, 00:15:33.791 "num_base_bdevs_operational": 2, 00:15:33.791 "process": { 00:15:33.791 "type": "rebuild", 00:15:33.791 "target": "spare", 00:15:33.791 "progress": { 00:15:33.791 "blocks": 20480, 00:15:33.791 "percent": 31 00:15:33.791 } 00:15:33.791 }, 00:15:33.791 "base_bdevs_list": [ 00:15:33.791 { 00:15:33.791 "name": "spare", 00:15:33.791 "uuid": "803e38fa-37e3-584c-878f-ca425e889caf", 00:15:33.791 "is_configured": true, 00:15:33.791 "data_offset": 0, 00:15:33.791 
"data_size": 65536 00:15:33.791 }, 00:15:33.791 { 00:15:33.791 "name": "BaseBdev2", 00:15:33.791 "uuid": "00c12263-9b3e-5ae6-8e8d-c85fa24093e4", 00:15:33.791 "is_configured": true, 00:15:33.791 "data_offset": 0, 00:15:33.791 "data_size": 65536 00:15:33.791 } 00:15:33.791 ] 00:15:33.791 }' 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.791 18:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.791 [2024-12-06 18:13:59.288482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.051 [2024-12-06 18:13:59.327862] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:34.051 [2024-12-06 18:13:59.327933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.051 [2024-12-06 18:13:59.327955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.051 [2024-12-06 18:13:59.327972] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.051 "name": "raid_bdev1", 00:15:34.051 "uuid": "309791a1-0c19-4982-9cd1-2faf4409cfd3", 00:15:34.051 "strip_size_kb": 0, 00:15:34.051 "state": "online", 00:15:34.051 "raid_level": "raid1", 00:15:34.051 "superblock": false, 00:15:34.051 "num_base_bdevs": 2, 00:15:34.051 "num_base_bdevs_discovered": 1, 00:15:34.051 "num_base_bdevs_operational": 1, 00:15:34.051 "base_bdevs_list": [ 00:15:34.051 { 00:15:34.051 "name": null, 00:15:34.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.051 
"is_configured": false, 00:15:34.051 "data_offset": 0, 00:15:34.051 "data_size": 65536 00:15:34.051 }, 00:15:34.051 { 00:15:34.051 "name": "BaseBdev2", 00:15:34.051 "uuid": "00c12263-9b3e-5ae6-8e8d-c85fa24093e4", 00:15:34.051 "is_configured": true, 00:15:34.051 "data_offset": 0, 00:15:34.051 "data_size": 65536 00:15:34.051 } 00:15:34.051 ] 00:15:34.051 }' 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.051 18:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.620 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.620 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.620 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:34.620 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:34.620 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.620 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.620 18:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.620 18:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.621 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.621 18:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.621 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.621 "name": "raid_bdev1", 00:15:34.621 "uuid": "309791a1-0c19-4982-9cd1-2faf4409cfd3", 00:15:34.621 "strip_size_kb": 0, 00:15:34.621 "state": "online", 00:15:34.621 "raid_level": "raid1", 00:15:34.621 "superblock": false, 00:15:34.621 "num_base_bdevs": 2, 00:15:34.621 
"num_base_bdevs_discovered": 1, 00:15:34.621 "num_base_bdevs_operational": 1, 00:15:34.621 "base_bdevs_list": [ 00:15:34.621 { 00:15:34.621 "name": null, 00:15:34.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.621 "is_configured": false, 00:15:34.621 "data_offset": 0, 00:15:34.621 "data_size": 65536 00:15:34.621 }, 00:15:34.621 { 00:15:34.621 "name": "BaseBdev2", 00:15:34.621 "uuid": "00c12263-9b3e-5ae6-8e8d-c85fa24093e4", 00:15:34.621 "is_configured": true, 00:15:34.621 "data_offset": 0, 00:15:34.621 "data_size": 65536 00:15:34.621 } 00:15:34.621 ] 00:15:34.621 }' 00:15:34.621 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.621 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.621 18:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.621 18:14:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:34.621 18:14:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:34.621 18:14:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.621 18:14:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.621 [2024-12-06 18:14:00.036780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.621 [2024-12-06 18:14:00.052945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:15:34.621 18:14:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.621 18:14:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:34.621 [2024-12-06 18:14:00.055502] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:35.631 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.631 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.631 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.631 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.631 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.631 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.632 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.632 18:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.632 18:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.632 18:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.632 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.632 "name": "raid_bdev1", 00:15:35.632 "uuid": "309791a1-0c19-4982-9cd1-2faf4409cfd3", 00:15:35.632 "strip_size_kb": 0, 00:15:35.632 "state": "online", 00:15:35.632 "raid_level": "raid1", 00:15:35.632 "superblock": false, 00:15:35.632 "num_base_bdevs": 2, 00:15:35.632 "num_base_bdevs_discovered": 2, 00:15:35.632 "num_base_bdevs_operational": 2, 00:15:35.632 "process": { 00:15:35.632 "type": "rebuild", 00:15:35.632 "target": "spare", 00:15:35.632 "progress": { 00:15:35.632 "blocks": 20480, 00:15:35.632 "percent": 31 00:15:35.632 } 00:15:35.632 }, 00:15:35.632 "base_bdevs_list": [ 00:15:35.632 { 00:15:35.632 "name": "spare", 00:15:35.632 "uuid": "803e38fa-37e3-584c-878f-ca425e889caf", 00:15:35.632 "is_configured": true, 00:15:35.632 "data_offset": 0, 00:15:35.632 "data_size": 65536 00:15:35.632 }, 00:15:35.632 { 00:15:35.632 "name": "BaseBdev2", 00:15:35.632 "uuid": 
"00c12263-9b3e-5ae6-8e8d-c85fa24093e4", 00:15:35.632 "is_configured": true, 00:15:35.632 "data_offset": 0, 00:15:35.632 "data_size": 65536 00:15:35.632 } 00:15:35.632 ] 00:15:35.632 }' 00:15:35.632 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=397 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.890 "name": "raid_bdev1", 00:15:35.890 "uuid": "309791a1-0c19-4982-9cd1-2faf4409cfd3", 00:15:35.890 "strip_size_kb": 0, 00:15:35.890 "state": "online", 00:15:35.890 "raid_level": "raid1", 00:15:35.890 "superblock": false, 00:15:35.890 "num_base_bdevs": 2, 00:15:35.890 "num_base_bdevs_discovered": 2, 00:15:35.890 "num_base_bdevs_operational": 2, 00:15:35.890 "process": { 00:15:35.890 "type": "rebuild", 00:15:35.890 "target": "spare", 00:15:35.890 "progress": { 00:15:35.890 "blocks": 22528, 00:15:35.890 "percent": 34 00:15:35.890 } 00:15:35.890 }, 00:15:35.890 "base_bdevs_list": [ 00:15:35.890 { 00:15:35.890 "name": "spare", 00:15:35.890 "uuid": "803e38fa-37e3-584c-878f-ca425e889caf", 00:15:35.890 "is_configured": true, 00:15:35.890 "data_offset": 0, 00:15:35.890 "data_size": 65536 00:15:35.890 }, 00:15:35.890 { 00:15:35.890 "name": "BaseBdev2", 00:15:35.890 "uuid": "00c12263-9b3e-5ae6-8e8d-c85fa24093e4", 00:15:35.890 "is_configured": true, 00:15:35.890 "data_offset": 0, 00:15:35.890 "data_size": 65536 00:15:35.890 } 00:15:35.890 ] 00:15:35.890 }' 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.890 18:14:01 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.272 "name": "raid_bdev1", 00:15:37.272 "uuid": "309791a1-0c19-4982-9cd1-2faf4409cfd3", 00:15:37.272 "strip_size_kb": 0, 00:15:37.272 "state": "online", 00:15:37.272 "raid_level": "raid1", 00:15:37.272 "superblock": false, 00:15:37.272 "num_base_bdevs": 2, 00:15:37.272 "num_base_bdevs_discovered": 2, 00:15:37.272 "num_base_bdevs_operational": 2, 00:15:37.272 "process": { 00:15:37.272 "type": "rebuild", 00:15:37.272 "target": "spare", 00:15:37.272 "progress": { 00:15:37.272 "blocks": 47104, 00:15:37.272 "percent": 71 00:15:37.272 } 00:15:37.272 }, 00:15:37.272 "base_bdevs_list": [ 00:15:37.272 { 00:15:37.272 "name": "spare", 00:15:37.272 "uuid": 
"803e38fa-37e3-584c-878f-ca425e889caf", 00:15:37.272 "is_configured": true, 00:15:37.272 "data_offset": 0, 00:15:37.272 "data_size": 65536 00:15:37.272 }, 00:15:37.272 { 00:15:37.272 "name": "BaseBdev2", 00:15:37.272 "uuid": "00c12263-9b3e-5ae6-8e8d-c85fa24093e4", 00:15:37.272 "is_configured": true, 00:15:37.272 "data_offset": 0, 00:15:37.272 "data_size": 65536 00:15:37.272 } 00:15:37.272 ] 00:15:37.272 }' 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.272 18:14:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.840 [2024-12-06 18:14:03.279123] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:37.840 [2024-12-06 18:14:03.279235] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:37.840 [2024-12-06 18:14:03.279298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.098 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.098 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.098 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.098 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.098 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.098 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.098 18:14:03 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.098 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.098 18:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.098 18:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.098 18:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.098 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.098 "name": "raid_bdev1", 00:15:38.098 "uuid": "309791a1-0c19-4982-9cd1-2faf4409cfd3", 00:15:38.098 "strip_size_kb": 0, 00:15:38.098 "state": "online", 00:15:38.098 "raid_level": "raid1", 00:15:38.098 "superblock": false, 00:15:38.098 "num_base_bdevs": 2, 00:15:38.098 "num_base_bdevs_discovered": 2, 00:15:38.098 "num_base_bdevs_operational": 2, 00:15:38.098 "base_bdevs_list": [ 00:15:38.098 { 00:15:38.098 "name": "spare", 00:15:38.098 "uuid": "803e38fa-37e3-584c-878f-ca425e889caf", 00:15:38.098 "is_configured": true, 00:15:38.098 "data_offset": 0, 00:15:38.098 "data_size": 65536 00:15:38.098 }, 00:15:38.098 { 00:15:38.098 "name": "BaseBdev2", 00:15:38.098 "uuid": "00c12263-9b3e-5ae6-8e8d-c85fa24093e4", 00:15:38.098 "is_configured": true, 00:15:38.098 "data_offset": 0, 00:15:38.098 "data_size": 65536 00:15:38.098 } 00:15:38.098 ] 00:15:38.098 }' 00:15:38.098 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.356 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:38.356 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.356 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:38.356 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:15:38.356 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:38.356 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.356 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.356 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.357 "name": "raid_bdev1", 00:15:38.357 "uuid": "309791a1-0c19-4982-9cd1-2faf4409cfd3", 00:15:38.357 "strip_size_kb": 0, 00:15:38.357 "state": "online", 00:15:38.357 "raid_level": "raid1", 00:15:38.357 "superblock": false, 00:15:38.357 "num_base_bdevs": 2, 00:15:38.357 "num_base_bdevs_discovered": 2, 00:15:38.357 "num_base_bdevs_operational": 2, 00:15:38.357 "base_bdevs_list": [ 00:15:38.357 { 00:15:38.357 "name": "spare", 00:15:38.357 "uuid": "803e38fa-37e3-584c-878f-ca425e889caf", 00:15:38.357 "is_configured": true, 00:15:38.357 "data_offset": 0, 00:15:38.357 "data_size": 65536 00:15:38.357 }, 00:15:38.357 { 00:15:38.357 "name": "BaseBdev2", 00:15:38.357 "uuid": "00c12263-9b3e-5ae6-8e8d-c85fa24093e4", 00:15:38.357 "is_configured": true, 00:15:38.357 "data_offset": 0, 00:15:38.357 "data_size": 65536 
00:15:38.357 } 00:15:38.357 ] 00:15:38.357 }' 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.357 18:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.616 
18:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.616 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.616 "name": "raid_bdev1", 00:15:38.616 "uuid": "309791a1-0c19-4982-9cd1-2faf4409cfd3", 00:15:38.616 "strip_size_kb": 0, 00:15:38.616 "state": "online", 00:15:38.616 "raid_level": "raid1", 00:15:38.616 "superblock": false, 00:15:38.616 "num_base_bdevs": 2, 00:15:38.616 "num_base_bdevs_discovered": 2, 00:15:38.616 "num_base_bdevs_operational": 2, 00:15:38.616 "base_bdevs_list": [ 00:15:38.616 { 00:15:38.616 "name": "spare", 00:15:38.616 "uuid": "803e38fa-37e3-584c-878f-ca425e889caf", 00:15:38.616 "is_configured": true, 00:15:38.616 "data_offset": 0, 00:15:38.616 "data_size": 65536 00:15:38.616 }, 00:15:38.616 { 00:15:38.616 "name": "BaseBdev2", 00:15:38.616 "uuid": "00c12263-9b3e-5ae6-8e8d-c85fa24093e4", 00:15:38.616 "is_configured": true, 00:15:38.616 "data_offset": 0, 00:15:38.616 "data_size": 65536 00:15:38.616 } 00:15:38.616 ] 00:15:38.616 }' 00:15:38.616 18:14:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.616 18:14:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.874 18:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:38.874 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.874 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.874 [2024-12-06 18:14:04.375160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:38.874 [2024-12-06 18:14:04.375205] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.874 [2024-12-06 18:14:04.375304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.874 [2024-12-06 18:14:04.375395] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.874 [2024-12-06 18:14:04.375421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:38.874 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.874 18:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.874 18:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:38.874 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.874 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.874 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.133 18:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:39.133 18:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:39.133 18:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:39.133 18:14:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:39.133 18:14:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.133 18:14:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:39.133 18:14:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.133 18:14:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:39.133 18:14:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:39.133 18:14:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:39.133 18:14:04 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.133 18:14:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.133 18:14:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:39.400 /dev/nbd0 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.400 1+0 records in 00:15:39.400 1+0 records out 00:15:39.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389852 s, 10.5 MB/s 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.400 18:14:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:39.661 /dev/nbd1 00:15:39.661 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:39.661 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.662 1+0 records in 00:15:39.662 1+0 records out 00:15:39.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381486 s, 10.7 MB/s 00:15:39.662 18:14:05 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.662 18:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:39.919 18:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:39.919 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.919 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:39.919 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:39.919 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:39.919 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.919 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:40.177 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.177 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.177 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.177 
18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.177 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.177 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.177 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:40.177 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.177 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.177 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75524 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75524 ']' 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75524 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.434 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75524 00:15:40.707 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.707 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.707 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75524' 00:15:40.707 killing process with pid 75524 00:15:40.707 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75524 00:15:40.707 Received shutdown signal, test time was about 60.000000 seconds 00:15:40.707 00:15:40.707 Latency(us) 00:15:40.707 [2024-12-06T18:14:06.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.707 [2024-12-06T18:14:06.227Z] =================================================================================================================== 00:15:40.707 [2024-12-06T18:14:06.227Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:40.707 [2024-12-06 18:14:05.964711] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:40.707 18:14:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75524 00:15:40.964 [2024-12-06 18:14:06.240311] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:41.899 00:15:41.899 real 0m18.520s 00:15:41.899 user 0m21.509s 00:15:41.899 sys 0m3.383s 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.899 ************************************ 00:15:41.899 END TEST raid_rebuild_test 
00:15:41.899 ************************************ 00:15:41.899 18:14:07 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:15:41.899 18:14:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:41.899 18:14:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.899 18:14:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.899 ************************************ 00:15:41.899 START TEST raid_rebuild_test_sb 00:15:41.899 ************************************ 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75970 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75970 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75970 ']' 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:41.899 18:14:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.899 18:14:07 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.900 18:14:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.900 18:14:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.159 [2024-12-06 18:14:07.471685] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:15:42.159 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:42.159 Zero copy mechanism will not be used. 00:15:42.159 [2024-12-06 18:14:07.471878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75970 ] 00:15:42.159 [2024-12-06 18:14:07.661164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.416 [2024-12-06 18:14:07.817461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.675 [2024-12-06 18:14:08.020379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.675 [2024-12-06 18:14:08.020461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.934 BaseBdev1_malloc 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.934 [2024-12-06 18:14:08.423636] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:42.934 [2024-12-06 18:14:08.423709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.934 [2024-12-06 18:14:08.423741] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:42.934 [2024-12-06 18:14:08.423760] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.934 [2024-12-06 18:14:08.426555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.934 [2024-12-06 18:14:08.426611] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:42.934 BaseBdev1 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.934 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.193 BaseBdev2_malloc 00:15:43.193 
18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.193 [2024-12-06 18:14:08.475658] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:43.193 [2024-12-06 18:14:08.475748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.193 [2024-12-06 18:14:08.475797] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:43.193 [2024-12-06 18:14:08.475819] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.193 [2024-12-06 18:14:08.478680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.193 [2024-12-06 18:14:08.478724] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:43.193 BaseBdev2 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.193 spare_malloc 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.193 spare_delay 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.193 [2024-12-06 18:14:08.547182] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:43.193 [2024-12-06 18:14:08.547266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.193 [2024-12-06 18:14:08.547296] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:43.193 [2024-12-06 18:14:08.547314] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.193 [2024-12-06 18:14:08.550141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.193 [2024-12-06 18:14:08.550188] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:43.193 spare 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.193 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.193 [2024-12-06 18:14:08.555254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.194 [2024-12-06 
18:14:08.557787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.194 [2024-12-06 18:14:08.558016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:43.194 [2024-12-06 18:14:08.558041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:43.194 [2024-12-06 18:14:08.558346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:43.194 [2024-12-06 18:14:08.558582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:43.194 [2024-12-06 18:14:08.558612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:43.194 [2024-12-06 18:14:08.558810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.194 "name": "raid_bdev1", 00:15:43.194 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:15:43.194 "strip_size_kb": 0, 00:15:43.194 "state": "online", 00:15:43.194 "raid_level": "raid1", 00:15:43.194 "superblock": true, 00:15:43.194 "num_base_bdevs": 2, 00:15:43.194 "num_base_bdevs_discovered": 2, 00:15:43.194 "num_base_bdevs_operational": 2, 00:15:43.194 "base_bdevs_list": [ 00:15:43.194 { 00:15:43.194 "name": "BaseBdev1", 00:15:43.194 "uuid": "6e0228df-7b44-598f-a64e-13938cd3eaff", 00:15:43.194 "is_configured": true, 00:15:43.194 "data_offset": 2048, 00:15:43.194 "data_size": 63488 00:15:43.194 }, 00:15:43.194 { 00:15:43.194 "name": "BaseBdev2", 00:15:43.194 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:15:43.194 "is_configured": true, 00:15:43.194 "data_offset": 2048, 00:15:43.194 "data_size": 63488 00:15:43.194 } 00:15:43.194 ] 00:15:43.194 }' 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.194 18:14:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.762 [2024-12-06 18:14:09.075725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:43.762 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:44.021 [2024-12-06 18:14:09.463553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:44.021 /dev/nbd0 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.021 1+0 records in 00:15:44.021 1+0 records out 00:15:44.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292269 s, 14.0 MB/s 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:44.021 18:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:52.152 63488+0 records in 00:15:52.152 63488+0 records out 00:15:52.152 32505856 bytes (33 MB, 31 MiB) copied, 6.74133 s, 4.8 MB/s 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@51 -- # local i 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:52.152 [2024-12-06 18:14:16.584638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.152 [2024-12-06 18:14:16.600745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.152 "name": "raid_bdev1", 00:15:52.152 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:15:52.152 "strip_size_kb": 0, 00:15:52.152 "state": "online", 00:15:52.152 "raid_level": "raid1", 00:15:52.152 "superblock": true, 00:15:52.152 "num_base_bdevs": 2, 00:15:52.152 "num_base_bdevs_discovered": 1, 00:15:52.152 "num_base_bdevs_operational": 1, 00:15:52.152 "base_bdevs_list": [ 00:15:52.152 { 00:15:52.152 "name": null, 00:15:52.152 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:52.152 "is_configured": false, 00:15:52.152 "data_offset": 0, 00:15:52.152 "data_size": 63488 00:15:52.152 }, 00:15:52.152 { 00:15:52.152 "name": "BaseBdev2", 00:15:52.152 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:15:52.152 "is_configured": true, 00:15:52.152 "data_offset": 2048, 00:15:52.152 "data_size": 63488 00:15:52.152 } 00:15:52.152 ] 00:15:52.152 }' 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.152 18:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.152 18:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:52.152 18:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.152 18:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.152 [2024-12-06 18:14:17.148966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.152 [2024-12-06 18:14:17.165592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:15:52.152 18:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.152 18:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:52.152 [2024-12-06 18:14:17.168041] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:52.720 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.720 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.720 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.720 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.720 
18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.720 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.720 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.720 18:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.720 18:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.720 18:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.720 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.720 "name": "raid_bdev1", 00:15:52.720 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:15:52.720 "strip_size_kb": 0, 00:15:52.720 "state": "online", 00:15:52.720 "raid_level": "raid1", 00:15:52.720 "superblock": true, 00:15:52.720 "num_base_bdevs": 2, 00:15:52.720 "num_base_bdevs_discovered": 2, 00:15:52.720 "num_base_bdevs_operational": 2, 00:15:52.720 "process": { 00:15:52.720 "type": "rebuild", 00:15:52.720 "target": "spare", 00:15:52.720 "progress": { 00:15:52.720 "blocks": 20480, 00:15:52.720 "percent": 32 00:15:52.720 } 00:15:52.720 }, 00:15:52.720 "base_bdevs_list": [ 00:15:52.720 { 00:15:52.720 "name": "spare", 00:15:52.720 "uuid": "1df2068e-7c44-5b3d-877f-7ec84e9bf462", 00:15:52.720 "is_configured": true, 00:15:52.720 "data_offset": 2048, 00:15:52.720 "data_size": 63488 00:15:52.720 }, 00:15:52.720 { 00:15:52.720 "name": "BaseBdev2", 00:15:52.720 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:15:52.720 "is_configured": true, 00:15:52.720 "data_offset": 2048, 00:15:52.720 "data_size": 63488 00:15:52.720 } 00:15:52.720 ] 00:15:52.720 }' 00:15:52.720 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.979 [2024-12-06 18:14:18.333191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.979 [2024-12-06 18:14:18.376990] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:52.979 [2024-12-06 18:14:18.377260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.979 [2024-12-06 18:14:18.377395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.979 [2024-12-06 18:14:18.377452] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.979 "name": "raid_bdev1", 00:15:52.979 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:15:52.979 "strip_size_kb": 0, 00:15:52.979 "state": "online", 00:15:52.979 "raid_level": "raid1", 00:15:52.979 "superblock": true, 00:15:52.979 "num_base_bdevs": 2, 00:15:52.979 "num_base_bdevs_discovered": 1, 00:15:52.979 "num_base_bdevs_operational": 1, 00:15:52.979 "base_bdevs_list": [ 00:15:52.979 { 00:15:52.979 "name": null, 00:15:52.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.979 "is_configured": false, 00:15:52.979 "data_offset": 0, 00:15:52.979 "data_size": 63488 00:15:52.979 }, 00:15:52.979 { 00:15:52.979 "name": "BaseBdev2", 00:15:52.979 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:15:52.979 "is_configured": true, 00:15:52.979 "data_offset": 2048, 00:15:52.979 "data_size": 63488 00:15:52.979 } 00:15:52.979 ] 00:15:52.979 }' 00:15:52.979 18:14:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.979 18:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.547 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.547 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.548 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.548 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.548 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.548 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.548 18:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.548 18:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.548 18:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.548 18:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.548 18:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.548 "name": "raid_bdev1", 00:15:53.548 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:15:53.548 "strip_size_kb": 0, 00:15:53.548 "state": "online", 00:15:53.548 "raid_level": "raid1", 00:15:53.548 "superblock": true, 00:15:53.548 "num_base_bdevs": 2, 00:15:53.548 "num_base_bdevs_discovered": 1, 00:15:53.548 "num_base_bdevs_operational": 1, 00:15:53.548 "base_bdevs_list": [ 00:15:53.548 { 00:15:53.548 "name": null, 00:15:53.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.548 "is_configured": false, 00:15:53.548 "data_offset": 0, 00:15:53.548 "data_size": 63488 00:15:53.548 }, 00:15:53.548 
{ 00:15:53.548 "name": "BaseBdev2", 00:15:53.548 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:15:53.548 "is_configured": true, 00:15:53.548 "data_offset": 2048, 00:15:53.548 "data_size": 63488 00:15:53.548 } 00:15:53.548 ] 00:15:53.548 }' 00:15:53.548 18:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.548 18:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.548 18:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.806 18:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.806 18:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:53.806 18:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.806 18:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.806 [2024-12-06 18:14:19.122169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.806 [2024-12-06 18:14:19.138084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:15:53.806 18:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.806 18:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:53.806 [2024-12-06 18:14:19.140632] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:54.742 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.742 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.742 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.743 18:14:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.743 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.743 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.743 18:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.743 18:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.743 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.743 18:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.743 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.743 "name": "raid_bdev1", 00:15:54.743 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:15:54.743 "strip_size_kb": 0, 00:15:54.743 "state": "online", 00:15:54.743 "raid_level": "raid1", 00:15:54.743 "superblock": true, 00:15:54.743 "num_base_bdevs": 2, 00:15:54.743 "num_base_bdevs_discovered": 2, 00:15:54.743 "num_base_bdevs_operational": 2, 00:15:54.743 "process": { 00:15:54.743 "type": "rebuild", 00:15:54.743 "target": "spare", 00:15:54.743 "progress": { 00:15:54.743 "blocks": 20480, 00:15:54.743 "percent": 32 00:15:54.743 } 00:15:54.743 }, 00:15:54.743 "base_bdevs_list": [ 00:15:54.743 { 00:15:54.743 "name": "spare", 00:15:54.743 "uuid": "1df2068e-7c44-5b3d-877f-7ec84e9bf462", 00:15:54.743 "is_configured": true, 00:15:54.743 "data_offset": 2048, 00:15:54.743 "data_size": 63488 00:15:54.743 }, 00:15:54.743 { 00:15:54.743 "name": "BaseBdev2", 00:15:54.743 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:15:54.743 "is_configured": true, 00:15:54.743 "data_offset": 2048, 00:15:54.743 "data_size": 63488 00:15:54.743 } 00:15:54.743 ] 00:15:54.743 }' 00:15:54.743 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:15:54.743 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.743 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:55.002 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=416 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.002 "name": "raid_bdev1", 00:15:55.002 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:15:55.002 "strip_size_kb": 0, 00:15:55.002 "state": "online", 00:15:55.002 "raid_level": "raid1", 00:15:55.002 "superblock": true, 00:15:55.002 "num_base_bdevs": 2, 00:15:55.002 "num_base_bdevs_discovered": 2, 00:15:55.002 "num_base_bdevs_operational": 2, 00:15:55.002 "process": { 00:15:55.002 "type": "rebuild", 00:15:55.002 "target": "spare", 00:15:55.002 "progress": { 00:15:55.002 "blocks": 22528, 00:15:55.002 "percent": 35 00:15:55.002 } 00:15:55.002 }, 00:15:55.002 "base_bdevs_list": [ 00:15:55.002 { 00:15:55.002 "name": "spare", 00:15:55.002 "uuid": "1df2068e-7c44-5b3d-877f-7ec84e9bf462", 00:15:55.002 "is_configured": true, 00:15:55.002 "data_offset": 2048, 00:15:55.002 "data_size": 63488 00:15:55.002 }, 00:15:55.002 { 00:15:55.002 "name": "BaseBdev2", 00:15:55.002 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:15:55.002 "is_configured": true, 00:15:55.002 "data_offset": 2048, 00:15:55.002 "data_size": 63488 00:15:55.002 } 00:15:55.002 ] 00:15:55.002 }' 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.002 18:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.002 18:14:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:55.995 18:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.995 18:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.995 18:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.995 18:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.995 18:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.995 18:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.995 18:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.995 18:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.995 18:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.995 18:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.995 18:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.268 18:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.268 "name": "raid_bdev1", 00:15:56.268 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:15:56.268 "strip_size_kb": 0, 00:15:56.268 "state": "online", 00:15:56.268 "raid_level": "raid1", 00:15:56.268 "superblock": true, 00:15:56.268 "num_base_bdevs": 2, 00:15:56.268 "num_base_bdevs_discovered": 2, 00:15:56.268 "num_base_bdevs_operational": 2, 00:15:56.268 "process": { 00:15:56.268 "type": "rebuild", 00:15:56.268 "target": "spare", 00:15:56.268 "progress": { 00:15:56.268 "blocks": 47104, 00:15:56.268 "percent": 74 00:15:56.268 } 00:15:56.268 }, 00:15:56.268 "base_bdevs_list": [ 00:15:56.268 { 
00:15:56.268 "name": "spare", 00:15:56.268 "uuid": "1df2068e-7c44-5b3d-877f-7ec84e9bf462", 00:15:56.268 "is_configured": true, 00:15:56.268 "data_offset": 2048, 00:15:56.268 "data_size": 63488 00:15:56.268 }, 00:15:56.268 { 00:15:56.268 "name": "BaseBdev2", 00:15:56.268 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:15:56.268 "is_configured": true, 00:15:56.268 "data_offset": 2048, 00:15:56.268 "data_size": 63488 00:15:56.268 } 00:15:56.268 ] 00:15:56.268 }' 00:15:56.268 18:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.268 18:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.268 18:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.268 18:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.269 18:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:56.835 [2024-12-06 18:14:22.264152] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:56.835 [2024-12-06 18:14:22.264256] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:56.835 [2024-12-06 18:14:22.264408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.401 18:14:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.401 "name": "raid_bdev1", 00:15:57.401 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:15:57.401 "strip_size_kb": 0, 00:15:57.401 "state": "online", 00:15:57.401 "raid_level": "raid1", 00:15:57.401 "superblock": true, 00:15:57.401 "num_base_bdevs": 2, 00:15:57.401 "num_base_bdevs_discovered": 2, 00:15:57.401 "num_base_bdevs_operational": 2, 00:15:57.401 "base_bdevs_list": [ 00:15:57.401 { 00:15:57.401 "name": "spare", 00:15:57.401 "uuid": "1df2068e-7c44-5b3d-877f-7ec84e9bf462", 00:15:57.401 "is_configured": true, 00:15:57.401 "data_offset": 2048, 00:15:57.401 "data_size": 63488 00:15:57.401 }, 00:15:57.401 { 00:15:57.401 "name": "BaseBdev2", 00:15:57.401 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:15:57.401 "is_configured": true, 00:15:57.401 "data_offset": 2048, 00:15:57.401 "data_size": 63488 00:15:57.401 } 00:15:57.401 ] 00:15:57.401 }' 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.401 "name": "raid_bdev1", 00:15:57.401 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:15:57.401 "strip_size_kb": 0, 00:15:57.401 "state": "online", 00:15:57.401 "raid_level": "raid1", 00:15:57.401 "superblock": true, 00:15:57.401 "num_base_bdevs": 2, 00:15:57.401 "num_base_bdevs_discovered": 2, 00:15:57.401 "num_base_bdevs_operational": 2, 00:15:57.401 "base_bdevs_list": [ 00:15:57.401 { 00:15:57.401 "name": "spare", 00:15:57.401 "uuid": "1df2068e-7c44-5b3d-877f-7ec84e9bf462", 00:15:57.401 "is_configured": true, 00:15:57.401 "data_offset": 2048, 00:15:57.401 "data_size": 63488 00:15:57.401 }, 00:15:57.401 { 00:15:57.401 "name": 
"BaseBdev2", 00:15:57.401 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:15:57.401 "is_configured": true, 00:15:57.401 "data_offset": 2048, 00:15:57.401 "data_size": 63488 00:15:57.401 } 00:15:57.401 ] 00:15:57.401 }' 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.401 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.660 "name": "raid_bdev1", 00:15:57.660 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:15:57.660 "strip_size_kb": 0, 00:15:57.660 "state": "online", 00:15:57.660 "raid_level": "raid1", 00:15:57.660 "superblock": true, 00:15:57.660 "num_base_bdevs": 2, 00:15:57.660 "num_base_bdevs_discovered": 2, 00:15:57.660 "num_base_bdevs_operational": 2, 00:15:57.660 "base_bdevs_list": [ 00:15:57.660 { 00:15:57.660 "name": "spare", 00:15:57.660 "uuid": "1df2068e-7c44-5b3d-877f-7ec84e9bf462", 00:15:57.660 "is_configured": true, 00:15:57.660 "data_offset": 2048, 00:15:57.660 "data_size": 63488 00:15:57.660 }, 00:15:57.660 { 00:15:57.660 "name": "BaseBdev2", 00:15:57.660 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:15:57.660 "is_configured": true, 00:15:57.660 "data_offset": 2048, 00:15:57.660 "data_size": 63488 00:15:57.660 } 00:15:57.660 ] 00:15:57.660 }' 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.660 18:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.227 [2024-12-06 18:14:23.468223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.227 [2024-12-06 18:14:23.468379] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.227 [2024-12-06 18:14:23.468581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.227 [2024-12-06 18:14:23.468702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.227 [2024-12-06 18:14:23.468723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:58.227 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:58.486 /dev/nbd0 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.486 1+0 records in 00:15:58.486 1+0 records out 00:15:58.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000253556 s, 16.2 MB/s 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.486 18:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:58.487 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.487 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:58.487 18:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:58.745 /dev/nbd1 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.745 18:14:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.745 1+0 records in 00:15:58.745 1+0 records out 00:15:58.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393247 s, 10.4 MB/s 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:58.745 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:59.003 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:59.003 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:59.003 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:59.003 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:59.003 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:59.003 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.003 
18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:59.263 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:59.263 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:59.263 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:59.263 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.263 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.263 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:59.263 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:59.263 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.263 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.263 18:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.830 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.830 [2024-12-06 18:14:25.109224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:59.830 [2024-12-06 18:14:25.109290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.830 [2024-12-06 18:14:25.109337] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:59.830 [2024-12-06 18:14:25.109354] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.830 [2024-12-06 18:14:25.112294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.830 [2024-12-06 18:14:25.112465] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:59.831 [2024-12-06 18:14:25.112608] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:59.831 [2024-12-06 18:14:25.112686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.831 [2024-12-06 18:14:25.112907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:15:59.831 spare 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.831 [2024-12-06 18:14:25.213036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:59.831 [2024-12-06 18:14:25.213093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:59.831 [2024-12-06 18:14:25.213472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:15:59.831 [2024-12-06 18:14:25.213768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:59.831 [2024-12-06 18:14:25.213786] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:59.831 [2024-12-06 18:14:25.214060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.831 "name": "raid_bdev1", 00:15:59.831 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:15:59.831 "strip_size_kb": 0, 00:15:59.831 "state": "online", 00:15:59.831 "raid_level": "raid1", 00:15:59.831 "superblock": true, 00:15:59.831 "num_base_bdevs": 2, 00:15:59.831 "num_base_bdevs_discovered": 2, 00:15:59.831 "num_base_bdevs_operational": 2, 00:15:59.831 "base_bdevs_list": [ 00:15:59.831 { 00:15:59.831 "name": "spare", 00:15:59.831 "uuid": "1df2068e-7c44-5b3d-877f-7ec84e9bf462", 00:15:59.831 "is_configured": true, 00:15:59.831 "data_offset": 2048, 00:15:59.831 "data_size": 63488 00:15:59.831 }, 00:15:59.831 { 00:15:59.831 "name": "BaseBdev2", 00:15:59.831 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:15:59.831 "is_configured": true, 00:15:59.831 "data_offset": 2048, 00:15:59.831 "data_size": 63488 00:15:59.831 } 00:15:59.831 ] 00:15:59.831 }' 00:15:59.831 18:14:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.831 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.396 "name": "raid_bdev1", 00:16:00.396 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:16:00.396 "strip_size_kb": 0, 00:16:00.396 "state": "online", 00:16:00.396 "raid_level": "raid1", 00:16:00.396 "superblock": true, 00:16:00.396 "num_base_bdevs": 2, 00:16:00.396 "num_base_bdevs_discovered": 2, 00:16:00.396 "num_base_bdevs_operational": 2, 00:16:00.396 "base_bdevs_list": [ 00:16:00.396 { 00:16:00.396 "name": "spare", 00:16:00.396 "uuid": "1df2068e-7c44-5b3d-877f-7ec84e9bf462", 00:16:00.396 "is_configured": true, 00:16:00.396 "data_offset": 2048, 00:16:00.396 "data_size": 63488 00:16:00.396 }, 
00:16:00.396 { 00:16:00.396 "name": "BaseBdev2", 00:16:00.396 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:16:00.396 "is_configured": true, 00:16:00.396 "data_offset": 2048, 00:16:00.396 "data_size": 63488 00:16:00.396 } 00:16:00.396 ] 00:16:00.396 }' 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.396 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.656 [2024-12-06 18:14:25.958272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.656 18:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.656 18:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.656 "name": "raid_bdev1", 00:16:00.656 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:16:00.656 "strip_size_kb": 0, 00:16:00.656 "state": "online", 00:16:00.656 "raid_level": "raid1", 00:16:00.656 "superblock": true, 00:16:00.656 "num_base_bdevs": 2, 00:16:00.656 "num_base_bdevs_discovered": 1, 00:16:00.656 "num_base_bdevs_operational": 
1, 00:16:00.656 "base_bdevs_list": [ 00:16:00.656 { 00:16:00.656 "name": null, 00:16:00.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.656 "is_configured": false, 00:16:00.656 "data_offset": 0, 00:16:00.656 "data_size": 63488 00:16:00.656 }, 00:16:00.656 { 00:16:00.656 "name": "BaseBdev2", 00:16:00.656 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:16:00.656 "is_configured": true, 00:16:00.656 "data_offset": 2048, 00:16:00.656 "data_size": 63488 00:16:00.656 } 00:16:00.656 ] 00:16:00.656 }' 00:16:00.656 18:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.656 18:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.223 18:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:01.223 18:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.223 18:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.223 [2024-12-06 18:14:26.478518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.223 [2024-12-06 18:14:26.478810] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:01.223 [2024-12-06 18:14:26.478846] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:01.223 [2024-12-06 18:14:26.478897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.223 [2024-12-06 18:14:26.495246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:16:01.223 18:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.223 18:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:01.223 [2024-12-06 18:14:26.497930] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:02.233 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.233 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.233 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.233 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.233 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.233 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.233 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.233 18:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.233 18:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.233 18:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.233 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.233 "name": "raid_bdev1", 00:16:02.233 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:16:02.233 "strip_size_kb": 0, 00:16:02.233 "state": "online", 00:16:02.233 "raid_level": "raid1", 
00:16:02.233 "superblock": true, 00:16:02.233 "num_base_bdevs": 2, 00:16:02.233 "num_base_bdevs_discovered": 2, 00:16:02.233 "num_base_bdevs_operational": 2, 00:16:02.233 "process": { 00:16:02.233 "type": "rebuild", 00:16:02.233 "target": "spare", 00:16:02.233 "progress": { 00:16:02.233 "blocks": 20480, 00:16:02.233 "percent": 32 00:16:02.233 } 00:16:02.233 }, 00:16:02.233 "base_bdevs_list": [ 00:16:02.233 { 00:16:02.233 "name": "spare", 00:16:02.233 "uuid": "1df2068e-7c44-5b3d-877f-7ec84e9bf462", 00:16:02.233 "is_configured": true, 00:16:02.233 "data_offset": 2048, 00:16:02.233 "data_size": 63488 00:16:02.233 }, 00:16:02.233 { 00:16:02.233 "name": "BaseBdev2", 00:16:02.233 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:16:02.233 "is_configured": true, 00:16:02.233 "data_offset": 2048, 00:16:02.233 "data_size": 63488 00:16:02.233 } 00:16:02.233 ] 00:16:02.233 }' 00:16:02.233 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.233 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.234 [2024-12-06 18:14:27.667345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.234 [2024-12-06 18:14:27.706274] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:02.234 [2024-12-06 18:14:27.706382] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:16:02.234 [2024-12-06 18:14:27.706414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.234 [2024-12-06 18:14:27.706428] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.234 18:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.492 18:14:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.492 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.492 "name": "raid_bdev1", 00:16:02.492 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:16:02.492 "strip_size_kb": 0, 00:16:02.492 "state": "online", 00:16:02.492 "raid_level": "raid1", 00:16:02.492 "superblock": true, 00:16:02.492 "num_base_bdevs": 2, 00:16:02.492 "num_base_bdevs_discovered": 1, 00:16:02.492 "num_base_bdevs_operational": 1, 00:16:02.492 "base_bdevs_list": [ 00:16:02.492 { 00:16:02.492 "name": null, 00:16:02.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.492 "is_configured": false, 00:16:02.492 "data_offset": 0, 00:16:02.492 "data_size": 63488 00:16:02.492 }, 00:16:02.492 { 00:16:02.492 "name": "BaseBdev2", 00:16:02.492 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:16:02.492 "is_configured": true, 00:16:02.492 "data_offset": 2048, 00:16:02.492 "data_size": 63488 00:16:02.492 } 00:16:02.492 ] 00:16:02.492 }' 00:16:02.492 18:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.492 18:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.059 18:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:03.059 18:14:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.059 18:14:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.059 [2024-12-06 18:14:28.304843] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:03.059 [2024-12-06 18:14:28.304925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.059 [2024-12-06 18:14:28.304957] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:03.059 [2024-12-06 18:14:28.304975] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.059 [2024-12-06 18:14:28.305580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.059 [2024-12-06 18:14:28.305633] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:03.059 [2024-12-06 18:14:28.305749] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:03.059 [2024-12-06 18:14:28.305789] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:03.059 [2024-12-06 18:14:28.305806] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:03.059 [2024-12-06 18:14:28.305841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.059 [2024-12-06 18:14:28.321209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:03.059 spare 00:16:03.059 18:14:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.059 18:14:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:03.059 [2024-12-06 18:14:28.323732] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.995 "name": "raid_bdev1", 00:16:03.995 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:16:03.995 "strip_size_kb": 0, 00:16:03.995 "state": "online", 00:16:03.995 "raid_level": "raid1", 00:16:03.995 "superblock": true, 00:16:03.995 "num_base_bdevs": 2, 00:16:03.995 "num_base_bdevs_discovered": 2, 00:16:03.995 "num_base_bdevs_operational": 2, 00:16:03.995 "process": { 00:16:03.995 "type": "rebuild", 00:16:03.995 "target": "spare", 00:16:03.995 "progress": { 00:16:03.995 "blocks": 20480, 00:16:03.995 "percent": 32 00:16:03.995 } 00:16:03.995 }, 00:16:03.995 "base_bdevs_list": [ 00:16:03.995 { 00:16:03.995 "name": "spare", 00:16:03.995 "uuid": "1df2068e-7c44-5b3d-877f-7ec84e9bf462", 00:16:03.995 "is_configured": true, 00:16:03.995 "data_offset": 2048, 00:16:03.995 "data_size": 63488 00:16:03.995 }, 00:16:03.995 { 00:16:03.995 "name": "BaseBdev2", 00:16:03.995 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:16:03.995 "is_configured": true, 00:16:03.995 "data_offset": 2048, 00:16:03.995 "data_size": 63488 00:16:03.995 } 00:16:03.995 ] 00:16:03.995 }' 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.995 
18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.995 18:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.995 [2024-12-06 18:14:29.493457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.254 [2024-12-06 18:14:29.532443] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:04.254 [2024-12-06 18:14:29.532523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.254 [2024-12-06 18:14:29.532551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.254 [2024-12-06 18:14:29.532562] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.254 "name": "raid_bdev1", 00:16:04.254 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:16:04.254 "strip_size_kb": 0, 00:16:04.254 "state": "online", 00:16:04.254 "raid_level": "raid1", 00:16:04.254 "superblock": true, 00:16:04.254 "num_base_bdevs": 2, 00:16:04.254 "num_base_bdevs_discovered": 1, 00:16:04.254 "num_base_bdevs_operational": 1, 00:16:04.254 "base_bdevs_list": [ 00:16:04.254 { 00:16:04.254 "name": null, 00:16:04.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.254 "is_configured": false, 00:16:04.254 "data_offset": 0, 00:16:04.254 "data_size": 63488 00:16:04.254 }, 00:16:04.254 { 00:16:04.254 "name": "BaseBdev2", 00:16:04.254 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:16:04.254 "is_configured": true, 00:16:04.254 "data_offset": 2048, 00:16:04.254 "data_size": 63488 00:16:04.254 } 00:16:04.254 ] 00:16:04.254 }' 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.254 18:14:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.821 18:14:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.821 "name": "raid_bdev1", 00:16:04.821 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:16:04.821 "strip_size_kb": 0, 00:16:04.821 "state": "online", 00:16:04.821 "raid_level": "raid1", 00:16:04.821 "superblock": true, 00:16:04.821 "num_base_bdevs": 2, 00:16:04.821 "num_base_bdevs_discovered": 1, 00:16:04.821 "num_base_bdevs_operational": 1, 00:16:04.821 "base_bdevs_list": [ 00:16:04.821 { 00:16:04.821 "name": null, 00:16:04.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.821 "is_configured": false, 00:16:04.821 "data_offset": 0, 00:16:04.821 "data_size": 63488 00:16:04.821 }, 00:16:04.821 { 00:16:04.821 "name": "BaseBdev2", 00:16:04.821 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:16:04.821 "is_configured": true, 00:16:04.821 "data_offset": 2048, 00:16:04.821 "data_size": 
63488 00:16:04.821 } 00:16:04.821 ] 00:16:04.821 }' 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.821 [2024-12-06 18:14:30.256872] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:04.821 [2024-12-06 18:14:30.256940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.821 [2024-12-06 18:14:30.256987] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:04.821 [2024-12-06 18:14:30.257014] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.821 [2024-12-06 18:14:30.257563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.821 [2024-12-06 18:14:30.257606] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:16:04.821 [2024-12-06 18:14:30.257707] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:04.821 [2024-12-06 18:14:30.257728] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:04.821 [2024-12-06 18:14:30.257763] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:04.821 [2024-12-06 18:14:30.257796] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:04.821 BaseBdev1 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.821 18:14:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:05.755 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:05.755 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.755 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.755 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.755 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.755 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:05.755 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.755 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.755 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.755 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.755 18:14:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.755 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.755 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.755 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.015 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.015 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.015 "name": "raid_bdev1", 00:16:06.015 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:16:06.015 "strip_size_kb": 0, 00:16:06.015 "state": "online", 00:16:06.015 "raid_level": "raid1", 00:16:06.015 "superblock": true, 00:16:06.015 "num_base_bdevs": 2, 00:16:06.015 "num_base_bdevs_discovered": 1, 00:16:06.015 "num_base_bdevs_operational": 1, 00:16:06.015 "base_bdevs_list": [ 00:16:06.015 { 00:16:06.015 "name": null, 00:16:06.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.015 "is_configured": false, 00:16:06.015 "data_offset": 0, 00:16:06.015 "data_size": 63488 00:16:06.015 }, 00:16:06.015 { 00:16:06.015 "name": "BaseBdev2", 00:16:06.015 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:16:06.015 "is_configured": true, 00:16:06.015 "data_offset": 2048, 00:16:06.015 "data_size": 63488 00:16:06.015 } 00:16:06.015 ] 00:16:06.015 }' 00:16:06.015 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.015 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.584 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.584 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.584 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:16:06.584 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.584 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.584 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.584 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.584 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.584 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.584 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.584 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.584 "name": "raid_bdev1", 00:16:06.584 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:16:06.584 "strip_size_kb": 0, 00:16:06.584 "state": "online", 00:16:06.584 "raid_level": "raid1", 00:16:06.584 "superblock": true, 00:16:06.584 "num_base_bdevs": 2, 00:16:06.584 "num_base_bdevs_discovered": 1, 00:16:06.584 "num_base_bdevs_operational": 1, 00:16:06.584 "base_bdevs_list": [ 00:16:06.584 { 00:16:06.584 "name": null, 00:16:06.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.584 "is_configured": false, 00:16:06.584 "data_offset": 0, 00:16:06.584 "data_size": 63488 00:16:06.584 }, 00:16:06.584 { 00:16:06.584 "name": "BaseBdev2", 00:16:06.584 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:16:06.584 "is_configured": true, 00:16:06.584 "data_offset": 2048, 00:16:06.584 "data_size": 63488 00:16:06.584 } 00:16:06.584 ] 00:16:06.584 }' 00:16:06.584 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.585 18:14:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.585 [2024-12-06 18:14:31.985628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.585 [2024-12-06 18:14:31.985896] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:06.585 [2024-12-06 18:14:31.985925] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:06.585 request: 00:16:06.585 { 00:16:06.585 "base_bdev": "BaseBdev1", 00:16:06.585 "raid_bdev": "raid_bdev1", 00:16:06.585 "method": 
"bdev_raid_add_base_bdev", 00:16:06.585 "req_id": 1 00:16:06.585 } 00:16:06.585 Got JSON-RPC error response 00:16:06.585 response: 00:16:06.585 { 00:16:06.585 "code": -22, 00:16:06.585 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:06.585 } 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:06.585 18:14:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:07.569 18:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.569 18:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.569 18:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.569 18:14:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.569 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.569 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:07.569 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.569 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.569 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.569 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.569 18:14:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.569 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.569 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.569 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.569 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.569 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.569 "name": "raid_bdev1", 00:16:07.569 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:16:07.569 "strip_size_kb": 0, 00:16:07.569 "state": "online", 00:16:07.569 "raid_level": "raid1", 00:16:07.569 "superblock": true, 00:16:07.569 "num_base_bdevs": 2, 00:16:07.569 "num_base_bdevs_discovered": 1, 00:16:07.569 "num_base_bdevs_operational": 1, 00:16:07.569 "base_bdevs_list": [ 00:16:07.569 { 00:16:07.569 "name": null, 00:16:07.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.569 "is_configured": false, 00:16:07.569 "data_offset": 0, 00:16:07.569 "data_size": 63488 00:16:07.569 }, 00:16:07.569 { 00:16:07.569 "name": "BaseBdev2", 00:16:07.569 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:16:07.569 "is_configured": true, 00:16:07.569 "data_offset": 2048, 00:16:07.569 "data_size": 63488 00:16:07.569 } 00:16:07.569 ] 00:16:07.569 }' 00:16:07.569 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.569 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.138 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.138 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.138 18:14:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.138 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.138 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.138 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.138 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.138 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.138 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.138 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.138 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.138 "name": "raid_bdev1", 00:16:08.138 "uuid": "2c0e0686-a21a-4d9e-ab7a-ae8021aeab37", 00:16:08.138 "strip_size_kb": 0, 00:16:08.138 "state": "online", 00:16:08.138 "raid_level": "raid1", 00:16:08.138 "superblock": true, 00:16:08.138 "num_base_bdevs": 2, 00:16:08.138 "num_base_bdevs_discovered": 1, 00:16:08.138 "num_base_bdevs_operational": 1, 00:16:08.138 "base_bdevs_list": [ 00:16:08.138 { 00:16:08.138 "name": null, 00:16:08.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.138 "is_configured": false, 00:16:08.138 "data_offset": 0, 00:16:08.138 "data_size": 63488 00:16:08.138 }, 00:16:08.138 { 00:16:08.138 "name": "BaseBdev2", 00:16:08.138 "uuid": "fefeaef7-c4a1-5290-994b-46eb45d0f5d5", 00:16:08.138 "is_configured": true, 00:16:08.138 "data_offset": 2048, 00:16:08.138 "data_size": 63488 00:16:08.138 } 00:16:08.138 ] 00:16:08.138 }' 00:16:08.138 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.138 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:16:08.138 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.397 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.397 18:14:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75970 00:16:08.397 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75970 ']' 00:16:08.397 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75970 00:16:08.397 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:08.397 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.397 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75970 00:16:08.397 killing process with pid 75970 00:16:08.397 Received shutdown signal, test time was about 60.000000 seconds 00:16:08.397 00:16:08.397 Latency(us) 00:16:08.397 [2024-12-06T18:14:33.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.397 [2024-12-06T18:14:33.917Z] =================================================================================================================== 00:16:08.397 [2024-12-06T18:14:33.917Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:08.397 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:08.397 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:08.397 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75970' 00:16:08.397 18:14:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75970 00:16:08.397 [2024-12-06 18:14:33.713431] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:08.397 18:14:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75970 00:16:08.397 [2024-12-06 18:14:33.713589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.397 [2024-12-06 18:14:33.713665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.397 [2024-12-06 18:14:33.713685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:08.656 [2024-12-06 18:14:33.996589] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:09.593 18:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:09.593 00:16:09.593 real 0m27.754s 00:16:09.593 user 0m34.253s 00:16:09.593 sys 0m4.389s 00:16:09.593 18:14:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.593 18:14:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.593 ************************************ 00:16:09.593 END TEST raid_rebuild_test_sb 00:16:09.593 ************************************ 00:16:09.851 18:14:35 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:16:09.851 18:14:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:09.851 18:14:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.851 18:14:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:09.851 ************************************ 00:16:09.851 START TEST raid_rebuild_test_io 00:16:09.851 ************************************ 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:09.851 
18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76744 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76744 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76744 ']' 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.851 18:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.851 [2024-12-06 18:14:35.266565] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:16:09.851 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:09.851 Zero copy mechanism will not be used. 
00:16:09.851 [2024-12-06 18:14:35.266869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76744 ] 00:16:10.108 [2024-12-06 18:14:35.441659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.108 [2024-12-06 18:14:35.576751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.365 [2024-12-06 18:14:35.791935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.365 [2024-12-06 18:14:35.792220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.932 BaseBdev1_malloc 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.932 [2024-12-06 18:14:36.339820] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:10.932 [2024-12-06 18:14:36.339905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.932 [2024-12-06 18:14:36.339936] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:10.932 [2024-12-06 18:14:36.339954] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.932 [2024-12-06 18:14:36.342677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.932 [2024-12-06 18:14:36.342730] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:10.932 BaseBdev1 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.932 BaseBdev2_malloc 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.932 [2024-12-06 18:14:36.397310] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:10.932 [2024-12-06 18:14:36.397390] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.932 [2024-12-06 18:14:36.397423] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:10.932 [2024-12-06 18:14:36.397441] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.932 [2024-12-06 18:14:36.400296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.932 [2024-12-06 18:14:36.400347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:10.932 BaseBdev2 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.932 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.191 spare_malloc 00:16:11.191 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.191 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:11.191 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.191 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.191 spare_delay 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.192 [2024-12-06 18:14:36.465568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:16:11.192 [2024-12-06 18:14:36.465647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.192 [2024-12-06 18:14:36.465678] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:11.192 [2024-12-06 18:14:36.465696] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.192 [2024-12-06 18:14:36.468457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.192 [2024-12-06 18:14:36.468509] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:11.192 spare 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.192 [2024-12-06 18:14:36.473633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.192 [2024-12-06 18:14:36.475995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.192 [2024-12-06 18:14:36.476116] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:11.192 [2024-12-06 18:14:36.476138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:11.192 [2024-12-06 18:14:36.476495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:11.192 [2024-12-06 18:14:36.476702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:11.192 [2024-12-06 18:14:36.476720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:16:11.192 [2024-12-06 18:14:36.476929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.192 
"name": "raid_bdev1", 00:16:11.192 "uuid": "37f34395-5ecb-436f-ab94-e79ed1afe3c5", 00:16:11.192 "strip_size_kb": 0, 00:16:11.192 "state": "online", 00:16:11.192 "raid_level": "raid1", 00:16:11.192 "superblock": false, 00:16:11.192 "num_base_bdevs": 2, 00:16:11.192 "num_base_bdevs_discovered": 2, 00:16:11.192 "num_base_bdevs_operational": 2, 00:16:11.192 "base_bdevs_list": [ 00:16:11.192 { 00:16:11.192 "name": "BaseBdev1", 00:16:11.192 "uuid": "989c2009-ddfd-5206-8a38-628bd2182aab", 00:16:11.192 "is_configured": true, 00:16:11.192 "data_offset": 0, 00:16:11.192 "data_size": 65536 00:16:11.192 }, 00:16:11.192 { 00:16:11.192 "name": "BaseBdev2", 00:16:11.192 "uuid": "485128a7-507d-522e-b60a-95598faa8157", 00:16:11.192 "is_configured": true, 00:16:11.192 "data_offset": 0, 00:16:11.192 "data_size": 65536 00:16:11.192 } 00:16:11.192 ] 00:16:11.192 }' 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.192 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.761 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:11.761 18:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:11.761 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.761 18:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.761 [2024-12-06 18:14:36.986161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.761 [2024-12-06 18:14:37.085772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:11.761 18:14:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.761 "name": "raid_bdev1", 00:16:11.761 "uuid": "37f34395-5ecb-436f-ab94-e79ed1afe3c5", 00:16:11.761 "strip_size_kb": 0, 00:16:11.761 "state": "online", 00:16:11.761 "raid_level": "raid1", 00:16:11.761 "superblock": false, 00:16:11.761 "num_base_bdevs": 2, 00:16:11.761 "num_base_bdevs_discovered": 1, 00:16:11.761 "num_base_bdevs_operational": 1, 00:16:11.761 "base_bdevs_list": [ 00:16:11.761 { 00:16:11.761 "name": null, 00:16:11.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.761 "is_configured": false, 00:16:11.761 "data_offset": 0, 00:16:11.761 "data_size": 65536 00:16:11.761 }, 00:16:11.761 { 00:16:11.761 "name": "BaseBdev2", 00:16:11.761 "uuid": "485128a7-507d-522e-b60a-95598faa8157", 00:16:11.761 "is_configured": true, 00:16:11.761 "data_offset": 0, 00:16:11.761 "data_size": 65536 00:16:11.761 } 00:16:11.761 ] 00:16:11.761 }' 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:11.761 18:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.761 [2024-12-06 18:14:37.205939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:11.761 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:11.761 Zero copy mechanism will not be used. 00:16:11.761 Running I/O for 60 seconds... 00:16:12.328 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.328 18:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.328 18:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.328 [2024-12-06 18:14:37.591595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.328 18:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.328 18:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:12.328 [2024-12-06 18:14:37.678356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:12.328 [2024-12-06 18:14:37.681046] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:12.328 [2024-12-06 18:14:37.807900] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:12.328 [2024-12-06 18:14:37.808447] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:12.597 [2024-12-06 18:14:38.083741] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:13.113 166.00 IOPS, 498.00 MiB/s [2024-12-06T18:14:38.633Z] [2024-12-06 18:14:38.408611] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:16:13.113 [2024-12-06 18:14:38.409448] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.371 "name": "raid_bdev1", 00:16:13.371 "uuid": "37f34395-5ecb-436f-ab94-e79ed1afe3c5", 00:16:13.371 "strip_size_kb": 0, 00:16:13.371 "state": "online", 00:16:13.371 "raid_level": "raid1", 00:16:13.371 "superblock": false, 00:16:13.371 "num_base_bdevs": 2, 00:16:13.371 "num_base_bdevs_discovered": 2, 00:16:13.371 "num_base_bdevs_operational": 2, 00:16:13.371 "process": { 00:16:13.371 "type": "rebuild", 00:16:13.371 "target": "spare", 00:16:13.371 "progress": { 00:16:13.371 "blocks": 8192, 00:16:13.371 "percent": 12 00:16:13.371 } 00:16:13.371 }, 00:16:13.371 "base_bdevs_list": [ 
00:16:13.371 { 00:16:13.371 "name": "spare", 00:16:13.371 "uuid": "08bdc3f8-c520-5885-930d-0724efa2522c", 00:16:13.371 "is_configured": true, 00:16:13.371 "data_offset": 0, 00:16:13.371 "data_size": 65536 00:16:13.371 }, 00:16:13.371 { 00:16:13.371 "name": "BaseBdev2", 00:16:13.371 "uuid": "485128a7-507d-522e-b60a-95598faa8157", 00:16:13.371 "is_configured": true, 00:16:13.371 "data_offset": 0, 00:16:13.371 "data_size": 65536 00:16:13.371 } 00:16:13.371 ] 00:16:13.371 }' 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.371 18:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.371 [2024-12-06 18:14:38.818362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.371 [2024-12-06 18:14:38.867151] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:13.371 [2024-12-06 18:14:38.867803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:13.371 [2024-12-06 18:14:38.868858] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:13.371 [2024-12-06 18:14:38.879137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.371 [2024-12-06 
18:14:38.879325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.371 [2024-12-06 18:14:38.879379] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:13.629 [2024-12-06 18:14:38.932160] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.629 "name": "raid_bdev1", 00:16:13.629 "uuid": "37f34395-5ecb-436f-ab94-e79ed1afe3c5", 00:16:13.629 "strip_size_kb": 0, 00:16:13.629 "state": "online", 00:16:13.629 "raid_level": "raid1", 00:16:13.629 "superblock": false, 00:16:13.629 "num_base_bdevs": 2, 00:16:13.629 "num_base_bdevs_discovered": 1, 00:16:13.629 "num_base_bdevs_operational": 1, 00:16:13.629 "base_bdevs_list": [ 00:16:13.629 { 00:16:13.629 "name": null, 00:16:13.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.629 "is_configured": false, 00:16:13.629 "data_offset": 0, 00:16:13.629 "data_size": 65536 00:16:13.629 }, 00:16:13.629 { 00:16:13.629 "name": "BaseBdev2", 00:16:13.629 "uuid": "485128a7-507d-522e-b60a-95598faa8157", 00:16:13.629 "is_configured": true, 00:16:13.629 "data_offset": 0, 00:16:13.629 "data_size": 65536 00:16:13.629 } 00:16:13.629 ] 00:16:13.629 }' 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.629 18:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.145 136.00 IOPS, 408.00 MiB/s [2024-12-06T18:14:39.665Z] 18:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.145 18:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.145 18:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.145 18:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.146 18:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.146 18:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:14.146 18:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.146 18:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.146 18:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.146 18:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.146 18:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.146 "name": "raid_bdev1", 00:16:14.146 "uuid": "37f34395-5ecb-436f-ab94-e79ed1afe3c5", 00:16:14.146 "strip_size_kb": 0, 00:16:14.146 "state": "online", 00:16:14.146 "raid_level": "raid1", 00:16:14.146 "superblock": false, 00:16:14.146 "num_base_bdevs": 2, 00:16:14.146 "num_base_bdevs_discovered": 1, 00:16:14.146 "num_base_bdevs_operational": 1, 00:16:14.146 "base_bdevs_list": [ 00:16:14.146 { 00:16:14.146 "name": null, 00:16:14.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.146 "is_configured": false, 00:16:14.146 "data_offset": 0, 00:16:14.146 "data_size": 65536 00:16:14.146 }, 00:16:14.146 { 00:16:14.146 "name": "BaseBdev2", 00:16:14.146 "uuid": "485128a7-507d-522e-b60a-95598faa8157", 00:16:14.146 "is_configured": true, 00:16:14.146 "data_offset": 0, 00:16:14.146 "data_size": 65536 00:16:14.146 } 00:16:14.146 ] 00:16:14.146 }' 00:16:14.146 18:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.146 18:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.146 18:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.146 18:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.146 18:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev 
raid_bdev1 spare 00:16:14.146 18:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.146 18:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.146 [2024-12-06 18:14:39.626202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.404 18:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.404 18:14:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:14.404 [2024-12-06 18:14:39.689199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:14.404 [2024-12-06 18:14:39.692052] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.404 [2024-12-06 18:14:39.794465] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:14.404 [2024-12-06 18:14:39.795329] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:14.404 [2024-12-06 18:14:39.907517] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:14.404 [2024-12-06 18:14:39.908086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:14.970 148.00 IOPS, 444.00 MiB/s [2024-12-06T18:14:40.490Z] [2024-12-06 18:14:40.232365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:14.970 [2024-12-06 18:14:40.232976] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:15.227 [2024-12-06 18:14:40.585889] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:15.227 18:14:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.227 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.227 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.227 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.227 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.227 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.227 18:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.227 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.227 18:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.227 [2024-12-06 18:14:40.696494] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:15.227 18:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.227 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.227 "name": "raid_bdev1", 00:16:15.227 "uuid": "37f34395-5ecb-436f-ab94-e79ed1afe3c5", 00:16:15.227 "strip_size_kb": 0, 00:16:15.227 "state": "online", 00:16:15.227 "raid_level": "raid1", 00:16:15.227 "superblock": false, 00:16:15.227 "num_base_bdevs": 2, 00:16:15.227 "num_base_bdevs_discovered": 2, 00:16:15.227 "num_base_bdevs_operational": 2, 00:16:15.227 "process": { 00:16:15.227 "type": "rebuild", 00:16:15.227 "target": "spare", 00:16:15.227 "progress": { 00:16:15.227 "blocks": 14336, 00:16:15.227 "percent": 21 00:16:15.227 } 00:16:15.227 }, 00:16:15.227 "base_bdevs_list": [ 00:16:15.227 { 00:16:15.227 "name": "spare", 
00:16:15.227 "uuid": "08bdc3f8-c520-5885-930d-0724efa2522c", 00:16:15.227 "is_configured": true, 00:16:15.227 "data_offset": 0, 00:16:15.227 "data_size": 65536 00:16:15.227 }, 00:16:15.227 { 00:16:15.227 "name": "BaseBdev2", 00:16:15.227 "uuid": "485128a7-507d-522e-b60a-95598faa8157", 00:16:15.227 "is_configured": true, 00:16:15.227 "data_offset": 0, 00:16:15.227 "data_size": 65536 00:16:15.227 } 00:16:15.227 ] 00:16:15.227 }' 00:16:15.227 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=436 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.487 
18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.487 "name": "raid_bdev1", 00:16:15.487 "uuid": "37f34395-5ecb-436f-ab94-e79ed1afe3c5", 00:16:15.487 "strip_size_kb": 0, 00:16:15.487 "state": "online", 00:16:15.487 "raid_level": "raid1", 00:16:15.487 "superblock": false, 00:16:15.487 "num_base_bdevs": 2, 00:16:15.487 "num_base_bdevs_discovered": 2, 00:16:15.487 "num_base_bdevs_operational": 2, 00:16:15.487 "process": { 00:16:15.487 "type": "rebuild", 00:16:15.487 "target": "spare", 00:16:15.487 "progress": { 00:16:15.487 "blocks": 16384, 00:16:15.487 "percent": 25 00:16:15.487 } 00:16:15.487 }, 00:16:15.487 "base_bdevs_list": [ 00:16:15.487 { 00:16:15.487 "name": "spare", 00:16:15.487 "uuid": "08bdc3f8-c520-5885-930d-0724efa2522c", 00:16:15.487 "is_configured": true, 00:16:15.487 "data_offset": 0, 00:16:15.487 "data_size": 65536 00:16:15.487 }, 00:16:15.487 { 00:16:15.487 "name": "BaseBdev2", 00:16:15.487 "uuid": "485128a7-507d-522e-b60a-95598faa8157", 00:16:15.487 "is_configured": true, 00:16:15.487 "data_offset": 0, 00:16:15.487 "data_size": 65536 00:16:15.487 } 00:16:15.487 ] 00:16:15.487 }' 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.487 18:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.744 18:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.744 18:14:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.744 [2024-12-06 18:14:41.120637] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:16.002 129.75 IOPS, 389.25 MiB/s [2024-12-06T18:14:41.522Z] [2024-12-06 18:14:41.385893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:16.568 [2024-12-06 18:14:41.918754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:16.568 18:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.568 18:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.568 18:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.568 18:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.568 18:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.568 18:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.569 18:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.569 18:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.569 18:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.569 18:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:16:16.569 18:14:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.569 18:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.569 "name": "raid_bdev1", 00:16:16.569 "uuid": "37f34395-5ecb-436f-ab94-e79ed1afe3c5", 00:16:16.569 "strip_size_kb": 0, 00:16:16.569 "state": "online", 00:16:16.569 "raid_level": "raid1", 00:16:16.569 "superblock": false, 00:16:16.569 "num_base_bdevs": 2, 00:16:16.569 "num_base_bdevs_discovered": 2, 00:16:16.569 "num_base_bdevs_operational": 2, 00:16:16.569 "process": { 00:16:16.569 "type": "rebuild", 00:16:16.569 "target": "spare", 00:16:16.569 "progress": { 00:16:16.569 "blocks": 36864, 00:16:16.569 "percent": 56 00:16:16.569 } 00:16:16.569 }, 00:16:16.569 "base_bdevs_list": [ 00:16:16.569 { 00:16:16.569 "name": "spare", 00:16:16.569 "uuid": "08bdc3f8-c520-5885-930d-0724efa2522c", 00:16:16.569 "is_configured": true, 00:16:16.569 "data_offset": 0, 00:16:16.569 "data_size": 65536 00:16:16.569 }, 00:16:16.569 { 00:16:16.569 "name": "BaseBdev2", 00:16:16.569 "uuid": "485128a7-507d-522e-b60a-95598faa8157", 00:16:16.569 "is_configured": true, 00:16:16.569 "data_offset": 0, 00:16:16.569 "data_size": 65536 00:16:16.569 } 00:16:16.569 ] 00:16:16.569 }' 00:16:16.569 18:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.827 18:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.827 18:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.827 18:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.827 18:14:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.085 114.60 IOPS, 343.80 MiB/s [2024-12-06T18:14:42.605Z] [2024-12-06 18:14:42.589577] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:17.705 [2024-12-06 18:14:42.951160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:17.705 18:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.705 18:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.705 18:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.705 18:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.705 18:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.705 18:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.705 18:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.705 18:14:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.705 18:14:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.705 18:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.988 18:14:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.988 103.17 IOPS, 309.50 MiB/s [2024-12-06T18:14:43.508Z] 18:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.988 "name": "raid_bdev1", 00:16:17.988 "uuid": "37f34395-5ecb-436f-ab94-e79ed1afe3c5", 00:16:17.988 "strip_size_kb": 0, 00:16:17.988 "state": "online", 00:16:17.988 "raid_level": "raid1", 00:16:17.988 "superblock": false, 00:16:17.988 "num_base_bdevs": 2, 00:16:17.988 "num_base_bdevs_discovered": 2, 00:16:17.988 "num_base_bdevs_operational": 
2, 00:16:17.988 "process": { 00:16:17.988 "type": "rebuild", 00:16:17.988 "target": "spare", 00:16:17.988 "progress": { 00:16:17.988 "blocks": 55296, 00:16:17.988 "percent": 84 00:16:17.988 } 00:16:17.988 }, 00:16:17.988 "base_bdevs_list": [ 00:16:17.988 { 00:16:17.988 "name": "spare", 00:16:17.988 "uuid": "08bdc3f8-c520-5885-930d-0724efa2522c", 00:16:17.988 "is_configured": true, 00:16:17.988 "data_offset": 0, 00:16:17.988 "data_size": 65536 00:16:17.988 }, 00:16:17.988 { 00:16:17.988 "name": "BaseBdev2", 00:16:17.988 "uuid": "485128a7-507d-522e-b60a-95598faa8157", 00:16:17.988 "is_configured": true, 00:16:17.988 "data_offset": 0, 00:16:17.988 "data_size": 65536 00:16:17.988 } 00:16:17.988 ] 00:16:17.988 }' 00:16:17.988 18:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.988 18:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.988 18:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.988 [2024-12-06 18:14:43.294944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:16:17.988 18:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.988 18:14:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.247 [2024-12-06 18:14:43.514436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:18.505 [2024-12-06 18:14:43.963837] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:18.764 [2024-12-06 18:14:44.071816] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:18.764 [2024-12-06 18:14:44.074866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.024 92.14 IOPS, 
276.43 MiB/s [2024-12-06T18:14:44.544Z] 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.024 "name": "raid_bdev1", 00:16:19.024 "uuid": "37f34395-5ecb-436f-ab94-e79ed1afe3c5", 00:16:19.024 "strip_size_kb": 0, 00:16:19.024 "state": "online", 00:16:19.024 "raid_level": "raid1", 00:16:19.024 "superblock": false, 00:16:19.024 "num_base_bdevs": 2, 00:16:19.024 "num_base_bdevs_discovered": 2, 00:16:19.024 "num_base_bdevs_operational": 2, 00:16:19.024 "base_bdevs_list": [ 00:16:19.024 { 00:16:19.024 "name": "spare", 00:16:19.024 "uuid": "08bdc3f8-c520-5885-930d-0724efa2522c", 00:16:19.024 "is_configured": true, 00:16:19.024 "data_offset": 0, 00:16:19.024 "data_size": 65536 00:16:19.024 }, 00:16:19.024 { 00:16:19.024 "name": 
"BaseBdev2", 00:16:19.024 "uuid": "485128a7-507d-522e-b60a-95598faa8157", 00:16:19.024 "is_configured": true, 00:16:19.024 "data_offset": 0, 00:16:19.024 "data_size": 65536 00:16:19.024 } 00:16:19.024 ] 00:16:19.024 }' 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.024 18:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.284 "name": 
"raid_bdev1", 00:16:19.284 "uuid": "37f34395-5ecb-436f-ab94-e79ed1afe3c5", 00:16:19.284 "strip_size_kb": 0, 00:16:19.284 "state": "online", 00:16:19.284 "raid_level": "raid1", 00:16:19.284 "superblock": false, 00:16:19.284 "num_base_bdevs": 2, 00:16:19.284 "num_base_bdevs_discovered": 2, 00:16:19.284 "num_base_bdevs_operational": 2, 00:16:19.284 "base_bdevs_list": [ 00:16:19.284 { 00:16:19.284 "name": "spare", 00:16:19.284 "uuid": "08bdc3f8-c520-5885-930d-0724efa2522c", 00:16:19.284 "is_configured": true, 00:16:19.284 "data_offset": 0, 00:16:19.284 "data_size": 65536 00:16:19.284 }, 00:16:19.284 { 00:16:19.284 "name": "BaseBdev2", 00:16:19.284 "uuid": "485128a7-507d-522e-b60a-95598faa8157", 00:16:19.284 "is_configured": true, 00:16:19.284 "data_offset": 0, 00:16:19.284 "data_size": 65536 00:16:19.284 } 00:16:19.284 ] 00:16:19.284 }' 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.284 18:14:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.284 "name": "raid_bdev1", 00:16:19.284 "uuid": "37f34395-5ecb-436f-ab94-e79ed1afe3c5", 00:16:19.284 "strip_size_kb": 0, 00:16:19.284 "state": "online", 00:16:19.284 "raid_level": "raid1", 00:16:19.284 "superblock": false, 00:16:19.284 "num_base_bdevs": 2, 00:16:19.284 "num_base_bdevs_discovered": 2, 00:16:19.284 "num_base_bdevs_operational": 2, 00:16:19.284 "base_bdevs_list": [ 00:16:19.284 { 00:16:19.284 "name": "spare", 00:16:19.284 "uuid": "08bdc3f8-c520-5885-930d-0724efa2522c", 00:16:19.284 "is_configured": true, 00:16:19.284 "data_offset": 0, 00:16:19.284 "data_size": 65536 00:16:19.284 }, 00:16:19.284 { 00:16:19.284 "name": "BaseBdev2", 00:16:19.284 "uuid": "485128a7-507d-522e-b60a-95598faa8157", 00:16:19.284 "is_configured": true, 00:16:19.284 "data_offset": 0, 00:16:19.284 "data_size": 65536 00:16:19.284 } 00:16:19.284 ] 00:16:19.284 }' 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:19.284 18:14:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.852 [2024-12-06 18:14:45.193188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.852 [2024-12-06 18:14:45.193350] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.852 85.25 IOPS, 255.75 MiB/s 00:16:19.852 Latency(us) 00:16:19.852 [2024-12-06T18:14:45.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.852 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:19.852 raid_bdev1 : 8.08 84.69 254.07 0.00 0.00 16390.83 286.72 119632.99 00:16:19.852 [2024-12-06T18:14:45.372Z] =================================================================================================================== 00:16:19.852 [2024-12-06T18:14:45.372Z] Total : 84.69 254.07 0.00 0.00 16390.83 286.72 119632.99 00:16:19.852 { 00:16:19.852 "results": [ 00:16:19.852 { 00:16:19.852 "job": "raid_bdev1", 00:16:19.852 "core_mask": "0x1", 00:16:19.852 "workload": "randrw", 00:16:19.852 "percentage": 50, 00:16:19.852 "status": "finished", 00:16:19.852 "queue_depth": 2, 00:16:19.852 "io_size": 3145728, 00:16:19.852 "runtime": 8.076541, 00:16:19.852 "iops": 84.68972051277892, 00:16:19.852 "mibps": 254.06916153833674, 00:16:19.852 "io_failed": 0, 00:16:19.852 "io_timeout": 0, 00:16:19.852 "avg_latency_us": 16390.82938862307, 00:16:19.852 "min_latency_us": 286.72, 00:16:19.852 "max_latency_us": 119632.98909090909 00:16:19.852 } 00:16:19.852 ], 00:16:19.852 "core_count": 1 00:16:19.852 } 00:16:19.852 
[2024-12-06 18:14:45.304886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.852 [2024-12-06 18:14:45.304967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.852 [2024-12-06 18:14:45.305073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.852 [2024-12-06 18:14:45.305092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 
00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:19.852 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:20.419 /dev/nbd0 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.419 1+0 records in 00:16:20.419 1+0 records out 00:16:20.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383542 s, 10.7 MB/s 00:16:20.419 18:14:45 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:20.419 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:20.420 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:20.420 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:20.420 18:14:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.420 18:14:45 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:20.678 /dev/nbd1 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.678 1+0 records in 00:16:20.678 1+0 records out 00:16:20.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402659 s, 10.2 MB/s 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- 
# '[' 4096 '!=' 0 ']' 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.678 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:20.936 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:20.936 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.936 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:20.936 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:20.936 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:20.936 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.936 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:21.194 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:21.194 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:21.194 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:21.194 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:21.194 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:21.194 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:21.194 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:21.194 18:14:46 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:21.194 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:21.194 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.194 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:21.194 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:21.194 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:21.194 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:21.194 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76744 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76744 
']' 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76744 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76744 00:16:21.453 killing process with pid 76744 00:16:21.453 Received shutdown signal, test time was about 9.696400 seconds 00:16:21.453 00:16:21.453 Latency(us) 00:16:21.453 [2024-12-06T18:14:46.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.453 [2024-12-06T18:14:46.973Z] =================================================================================================================== 00:16:21.453 [2024-12-06T18:14:46.973Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76744' 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76744 00:16:21.453 [2024-12-06 18:14:46.904952] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.453 18:14:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76744 00:16:21.712 [2024-12-06 18:14:47.141478] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.087 18:14:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:23.087 00:16:23.087 real 0m13.120s 00:16:23.087 user 0m17.312s 00:16:23.087 sys 0m1.390s 00:16:23.087 18:14:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:16:23.087 ************************************ 00:16:23.087 END TEST raid_rebuild_test_io 00:16:23.087 ************************************ 00:16:23.087 18:14:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.087 18:14:48 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:16:23.087 18:14:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:23.087 18:14:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.087 18:14:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.087 ************************************ 00:16:23.087 START TEST raid_rebuild_test_sb_io 00:16:23.087 ************************************ 00:16:23.087 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:16:23.087 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:23.087 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:23.087 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:23.087 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:23.087 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:23.087 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:23.087 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.087 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:23.087 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:23.087 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.088 
18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:23.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77126 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77126 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77126 ']' 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.088 18:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.088 [2024-12-06 18:14:48.458070] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:16:23.088 [2024-12-06 18:14:48.458469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77126 ] 00:16:23.088 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:23.088 Zero copy mechanism will not be used. 
00:16:23.346 [2024-12-06 18:14:48.653173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.346 [2024-12-06 18:14:48.806207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.605 [2024-12-06 18:14:49.009522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.605 [2024-12-06 18:14:49.009593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.172 BaseBdev1_malloc 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.172 [2024-12-06 18:14:49.537109] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:24.172 [2024-12-06 18:14:49.537193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.172 [2024-12-06 18:14:49.537228] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:16:24.172 [2024-12-06 18:14:49.537246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.172 [2024-12-06 18:14:49.540257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.172 [2024-12-06 18:14:49.540305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:24.172 BaseBdev1 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.172 BaseBdev2_malloc 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.172 [2024-12-06 18:14:49.593366] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:24.172 [2024-12-06 18:14:49.593441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.172 [2024-12-06 18:14:49.593474] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:24.172 [2024-12-06 18:14:49.593492] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.172 [2024-12-06 18:14:49.596262] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.172 [2024-12-06 18:14:49.596311] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:24.172 BaseBdev2 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.172 spare_malloc 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.172 spare_delay 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.172 [2024-12-06 18:14:49.665797] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:24.172 [2024-12-06 18:14:49.665871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.172 [2024-12-06 18:14:49.665900] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:24.172 [2024-12-06 18:14:49.665918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.172 [2024-12-06 18:14:49.668738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.172 [2024-12-06 18:14:49.668812] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:24.172 spare 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.172 [2024-12-06 18:14:49.673856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.172 [2024-12-06 18:14:49.676281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.172 [2024-12-06 18:14:49.676508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:24.172 [2024-12-06 18:14:49.676532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:24.172 [2024-12-06 18:14:49.676863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:24.172 [2024-12-06 18:14:49.677111] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:24.172 [2024-12-06 18:14:49.677136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:24.172 [2024-12-06 18:14:49.677337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.172 18:14:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.172 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.431 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.431 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.431 "name": "raid_bdev1", 00:16:24.431 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:24.431 
"strip_size_kb": 0, 00:16:24.431 "state": "online", 00:16:24.431 "raid_level": "raid1", 00:16:24.431 "superblock": true, 00:16:24.431 "num_base_bdevs": 2, 00:16:24.431 "num_base_bdevs_discovered": 2, 00:16:24.431 "num_base_bdevs_operational": 2, 00:16:24.431 "base_bdevs_list": [ 00:16:24.431 { 00:16:24.431 "name": "BaseBdev1", 00:16:24.431 "uuid": "f1db6349-9f47-50f6-96d6-337bec69ff1c", 00:16:24.431 "is_configured": true, 00:16:24.431 "data_offset": 2048, 00:16:24.431 "data_size": 63488 00:16:24.431 }, 00:16:24.431 { 00:16:24.431 "name": "BaseBdev2", 00:16:24.431 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:24.431 "is_configured": true, 00:16:24.431 "data_offset": 2048, 00:16:24.431 "data_size": 63488 00:16:24.431 } 00:16:24.431 ] 00:16:24.431 }' 00:16:24.431 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.431 18:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.998 [2024-12-06 18:14:50.254370] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.998 [2024-12-06 18:14:50.358025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:24.998 18:14:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.998 "name": "raid_bdev1", 00:16:24.998 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:24.998 "strip_size_kb": 0, 00:16:24.998 "state": "online", 00:16:24.998 "raid_level": "raid1", 00:16:24.998 "superblock": true, 00:16:24.998 "num_base_bdevs": 2, 00:16:24.998 "num_base_bdevs_discovered": 1, 00:16:24.998 "num_base_bdevs_operational": 1, 00:16:24.998 "base_bdevs_list": [ 00:16:24.998 { 00:16:24.998 "name": null, 00:16:24.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.998 "is_configured": false, 00:16:24.998 "data_offset": 0, 00:16:24.998 "data_size": 63488 00:16:24.998 }, 00:16:24.998 { 00:16:24.998 "name": "BaseBdev2", 00:16:24.998 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:24.998 "is_configured": true, 00:16:24.998 "data_offset": 2048, 00:16:24.998 "data_size": 63488 00:16:24.998 } 00:16:24.998 ] 00:16:24.998 }' 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.998 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.998 [2024-12-06 18:14:50.490647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:24.998 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:24.998 Zero copy mechanism will not be used. 00:16:24.998 Running I/O for 60 seconds... 00:16:25.565 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:25.565 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.565 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.565 [2024-12-06 18:14:50.925326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:25.565 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.565 18:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:25.565 [2024-12-06 18:14:50.999745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:25.565 [2024-12-06 18:14:51.002904] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:25.822 [2024-12-06 18:14:51.120741] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:25.822 [2024-12-06 18:14:51.121435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:25.822 [2024-12-06 18:14:51.325746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:25.822 [2024-12-06 18:14:51.326427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:16:26.080 171.00 IOPS, 513.00 MiB/s [2024-12-06T18:14:51.600Z] [2024-12-06 18:14:51.574926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:26.339 [2024-12-06 18:14:51.789416] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:26.339 [2024-12-06 18:14:51.790123] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:26.599 18:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.599 18:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.599 18:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.599 18:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.599 18:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.599 18:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.599 18:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.599 18:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.599 18:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.599 18:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.599 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.599 "name": "raid_bdev1", 00:16:26.599 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:26.599 "strip_size_kb": 0, 00:16:26.599 "state": "online", 00:16:26.599 "raid_level": "raid1", 
00:16:26.599 "superblock": true, 00:16:26.599 "num_base_bdevs": 2, 00:16:26.599 "num_base_bdevs_discovered": 2, 00:16:26.599 "num_base_bdevs_operational": 2, 00:16:26.599 "process": { 00:16:26.599 "type": "rebuild", 00:16:26.599 "target": "spare", 00:16:26.599 "progress": { 00:16:26.599 "blocks": 10240, 00:16:26.599 "percent": 16 00:16:26.599 } 00:16:26.599 }, 00:16:26.599 "base_bdevs_list": [ 00:16:26.599 { 00:16:26.599 "name": "spare", 00:16:26.599 "uuid": "63c62394-d009-559d-8e85-6bd93567990a", 00:16:26.599 "is_configured": true, 00:16:26.599 "data_offset": 2048, 00:16:26.599 "data_size": 63488 00:16:26.599 }, 00:16:26.599 { 00:16:26.599 "name": "BaseBdev2", 00:16:26.599 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:26.599 "is_configured": true, 00:16:26.599 "data_offset": 2048, 00:16:26.599 "data_size": 63488 00:16:26.599 } 00:16:26.599 ] 00:16:26.599 }' 00:16:26.599 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.599 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.599 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.858 [2024-12-06 18:14:52.130748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:26.858 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.858 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:26.858 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.858 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.858 [2024-12-06 18:14:52.148581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.858 [2024-12-06 18:14:52.148699] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:26.858 [2024-12-06 18:14:52.249684] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:26.858 [2024-12-06 18:14:52.253009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.859 [2024-12-06 18:14:52.253072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.859 [2024-12-06 18:14:52.253087] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:26.859 [2024-12-06 18:14:52.307122] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.859 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.118 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.118 "name": "raid_bdev1", 00:16:27.118 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:27.118 "strip_size_kb": 0, 00:16:27.118 "state": "online", 00:16:27.118 "raid_level": "raid1", 00:16:27.118 "superblock": true, 00:16:27.118 "num_base_bdevs": 2, 00:16:27.118 "num_base_bdevs_discovered": 1, 00:16:27.118 "num_base_bdevs_operational": 1, 00:16:27.118 "base_bdevs_list": [ 00:16:27.118 { 00:16:27.118 "name": null, 00:16:27.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.118 "is_configured": false, 00:16:27.118 "data_offset": 0, 00:16:27.118 "data_size": 63488 00:16:27.118 }, 00:16:27.118 { 00:16:27.118 "name": "BaseBdev2", 00:16:27.118 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:27.118 "is_configured": true, 00:16:27.118 "data_offset": 2048, 00:16:27.118 "data_size": 63488 00:16:27.118 } 00:16:27.118 ] 00:16:27.118 }' 00:16:27.118 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.118 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.376 142.50 IOPS, 427.50 MiB/s [2024-12-06T18:14:52.896Z] 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.376 18:14:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.376 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.376 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.376 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.376 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.376 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.376 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.376 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.376 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.634 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.634 "name": "raid_bdev1", 00:16:27.634 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:27.634 "strip_size_kb": 0, 00:16:27.634 "state": "online", 00:16:27.634 "raid_level": "raid1", 00:16:27.634 "superblock": true, 00:16:27.634 "num_base_bdevs": 2, 00:16:27.634 "num_base_bdevs_discovered": 1, 00:16:27.634 "num_base_bdevs_operational": 1, 00:16:27.634 "base_bdevs_list": [ 00:16:27.634 { 00:16:27.634 "name": null, 00:16:27.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.634 "is_configured": false, 00:16:27.634 "data_offset": 0, 00:16:27.634 "data_size": 63488 00:16:27.634 }, 00:16:27.634 { 00:16:27.634 "name": "BaseBdev2", 00:16:27.634 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:27.634 "is_configured": true, 00:16:27.634 "data_offset": 2048, 00:16:27.634 "data_size": 63488 00:16:27.634 } 00:16:27.634 ] 00:16:27.634 }' 00:16:27.634 18:14:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.634 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.634 18:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.634 18:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.634 18:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:27.634 18:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.634 18:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.634 [2024-12-06 18:14:53.026625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.634 18:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.634 18:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:27.634 [2024-12-06 18:14:53.106803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:27.634 [2024-12-06 18:14:53.109401] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:27.892 [2024-12-06 18:14:53.219736] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:27.892 [2024-12-06 18:14:53.220375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:28.180 [2024-12-06 18:14:53.441400] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:28.180 [2024-12-06 18:14:53.441837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:16:28.438 148.33 IOPS, 445.00 MiB/s [2024-12-06T18:14:53.958Z] [2024-12-06 18:14:53.806423] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:28.697 [2024-12-06 18:14:54.027894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:28.697 [2024-12-06 18:14:54.028276] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:28.697 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.697 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.697 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.697 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.697 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.697 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.697 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.697 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.697 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.697 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.697 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.697 "name": "raid_bdev1", 00:16:28.697 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:28.697 "strip_size_kb": 0, 00:16:28.697 "state": "online", 00:16:28.697 "raid_level": "raid1", 00:16:28.697 
"superblock": true, 00:16:28.697 "num_base_bdevs": 2, 00:16:28.697 "num_base_bdevs_discovered": 2, 00:16:28.697 "num_base_bdevs_operational": 2, 00:16:28.697 "process": { 00:16:28.697 "type": "rebuild", 00:16:28.697 "target": "spare", 00:16:28.697 "progress": { 00:16:28.697 "blocks": 10240, 00:16:28.697 "percent": 16 00:16:28.697 } 00:16:28.697 }, 00:16:28.697 "base_bdevs_list": [ 00:16:28.697 { 00:16:28.697 "name": "spare", 00:16:28.697 "uuid": "63c62394-d009-559d-8e85-6bd93567990a", 00:16:28.697 "is_configured": true, 00:16:28.697 "data_offset": 2048, 00:16:28.697 "data_size": 63488 00:16:28.697 }, 00:16:28.697 { 00:16:28.697 "name": "BaseBdev2", 00:16:28.697 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:28.697 "is_configured": true, 00:16:28.697 "data_offset": 2048, 00:16:28.697 "data_size": 63488 00:16:28.698 } 00:16:28.698 ] 00:16:28.698 }' 00:16:28.698 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.698 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.698 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:28.957 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:28.957 
18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=450 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.957 [2024-12-06 18:14:54.261349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:28.957 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.957 "name": "raid_bdev1", 00:16:28.958 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:28.958 "strip_size_kb": 0, 00:16:28.958 "state": "online", 00:16:28.958 "raid_level": "raid1", 00:16:28.958 "superblock": true, 00:16:28.958 "num_base_bdevs": 2, 00:16:28.958 "num_base_bdevs_discovered": 2, 00:16:28.958 "num_base_bdevs_operational": 2, 00:16:28.958 "process": { 
00:16:28.958 "type": "rebuild", 00:16:28.958 "target": "spare", 00:16:28.958 "progress": { 00:16:28.958 "blocks": 12288, 00:16:28.958 "percent": 19 00:16:28.958 } 00:16:28.958 }, 00:16:28.958 "base_bdevs_list": [ 00:16:28.958 { 00:16:28.958 "name": "spare", 00:16:28.958 "uuid": "63c62394-d009-559d-8e85-6bd93567990a", 00:16:28.958 "is_configured": true, 00:16:28.958 "data_offset": 2048, 00:16:28.958 "data_size": 63488 00:16:28.958 }, 00:16:28.958 { 00:16:28.958 "name": "BaseBdev2", 00:16:28.958 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:28.958 "is_configured": true, 00:16:28.958 "data_offset": 2048, 00:16:28.958 "data_size": 63488 00:16:28.958 } 00:16:28.958 ] 00:16:28.958 }' 00:16:28.958 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.958 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.958 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.958 [2024-12-06 18:14:54.381299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:28.958 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.958 18:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.216 133.50 IOPS, 400.50 MiB/s [2024-12-06T18:14:54.736Z] [2024-12-06 18:14:54.726016] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:29.216 [2024-12-06 18:14:54.726365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:29.791 [2024-12-06 18:14:55.057964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:29.791 [2024-12-06 
18:14:55.201389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:29.791 [2024-12-06 18:14:55.201778] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.055 "name": "raid_bdev1", 00:16:30.055 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:30.055 "strip_size_kb": 0, 00:16:30.055 "state": "online", 00:16:30.055 "raid_level": "raid1", 00:16:30.055 "superblock": true, 00:16:30.055 "num_base_bdevs": 2, 00:16:30.055 "num_base_bdevs_discovered": 2, 00:16:30.055 
"num_base_bdevs_operational": 2, 00:16:30.055 "process": { 00:16:30.055 "type": "rebuild", 00:16:30.055 "target": "spare", 00:16:30.055 "progress": { 00:16:30.055 "blocks": 28672, 00:16:30.055 "percent": 45 00:16:30.055 } 00:16:30.055 }, 00:16:30.055 "base_bdevs_list": [ 00:16:30.055 { 00:16:30.055 "name": "spare", 00:16:30.055 "uuid": "63c62394-d009-559d-8e85-6bd93567990a", 00:16:30.055 "is_configured": true, 00:16:30.055 "data_offset": 2048, 00:16:30.055 "data_size": 63488 00:16:30.055 }, 00:16:30.055 { 00:16:30.055 "name": "BaseBdev2", 00:16:30.055 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:30.055 "is_configured": true, 00:16:30.055 "data_offset": 2048, 00:16:30.055 "data_size": 63488 00:16:30.055 } 00:16:30.055 ] 00:16:30.055 }' 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.055 115.80 IOPS, 347.40 MiB/s [2024-12-06T18:14:55.575Z] [2024-12-06 18:14:55.542936] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.055 18:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.620 [2024-12-06 18:14:56.012663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:30.877 [2024-12-06 18:14:56.227663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:31.134 104.83 IOPS, 314.50 MiB/s [2024-12-06T18:14:56.654Z] 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # 
(( SECONDS < timeout )) 00:16:31.134 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.134 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.134 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.135 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.135 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.135 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.135 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.135 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.135 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.135 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.135 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.135 "name": "raid_bdev1", 00:16:31.135 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:31.135 "strip_size_kb": 0, 00:16:31.135 "state": "online", 00:16:31.135 "raid_level": "raid1", 00:16:31.135 "superblock": true, 00:16:31.135 "num_base_bdevs": 2, 00:16:31.135 "num_base_bdevs_discovered": 2, 00:16:31.135 "num_base_bdevs_operational": 2, 00:16:31.135 "process": { 00:16:31.135 "type": "rebuild", 00:16:31.135 "target": "spare", 00:16:31.135 "progress": { 00:16:31.135 "blocks": 47104, 00:16:31.135 "percent": 74 00:16:31.135 } 00:16:31.135 }, 00:16:31.135 "base_bdevs_list": [ 00:16:31.135 { 00:16:31.135 "name": "spare", 00:16:31.135 "uuid": "63c62394-d009-559d-8e85-6bd93567990a", 00:16:31.135 
"is_configured": true, 00:16:31.135 "data_offset": 2048, 00:16:31.135 "data_size": 63488 00:16:31.135 }, 00:16:31.135 { 00:16:31.135 "name": "BaseBdev2", 00:16:31.135 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:31.135 "is_configured": true, 00:16:31.135 "data_offset": 2048, 00:16:31.135 "data_size": 63488 00:16:31.135 } 00:16:31.135 ] 00:16:31.135 }' 00:16:31.135 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.392 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.392 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.392 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.392 18:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.959 [2024-12-06 18:14:57.318594] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:31.959 [2024-12-06 18:14:57.338235] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:31.959 [2024-12-06 18:14:57.340626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.217 95.71 IOPS, 287.14 MiB/s [2024-12-06T18:14:57.737Z] 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.217 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.217 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.217 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.217 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.217 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.217 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.217 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.217 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.217 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.217 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.475 "name": "raid_bdev1", 00:16:32.475 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:32.475 "strip_size_kb": 0, 00:16:32.475 "state": "online", 00:16:32.475 "raid_level": "raid1", 00:16:32.475 "superblock": true, 00:16:32.475 "num_base_bdevs": 2, 00:16:32.475 "num_base_bdevs_discovered": 2, 00:16:32.475 "num_base_bdevs_operational": 2, 00:16:32.475 "base_bdevs_list": [ 00:16:32.475 { 00:16:32.475 "name": "spare", 00:16:32.475 "uuid": "63c62394-d009-559d-8e85-6bd93567990a", 00:16:32.475 "is_configured": true, 00:16:32.475 "data_offset": 2048, 00:16:32.475 "data_size": 63488 00:16:32.475 }, 00:16:32.475 { 00:16:32.475 "name": "BaseBdev2", 00:16:32.475 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:32.475 "is_configured": true, 00:16:32.475 "data_offset": 2048, 00:16:32.475 "data_size": 63488 00:16:32.475 } 00:16:32.475 ] 00:16:32.475 }' 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.475 "name": "raid_bdev1", 00:16:32.475 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:32.475 "strip_size_kb": 0, 00:16:32.475 "state": "online", 00:16:32.475 "raid_level": "raid1", 00:16:32.475 "superblock": true, 00:16:32.475 "num_base_bdevs": 2, 00:16:32.475 "num_base_bdevs_discovered": 2, 00:16:32.475 "num_base_bdevs_operational": 2, 00:16:32.475 "base_bdevs_list": [ 00:16:32.475 { 00:16:32.475 "name": "spare", 00:16:32.475 "uuid": "63c62394-d009-559d-8e85-6bd93567990a", 00:16:32.475 "is_configured": true, 00:16:32.475 "data_offset": 2048, 00:16:32.475 "data_size": 63488 00:16:32.475 }, 
00:16:32.475 { 00:16:32.475 "name": "BaseBdev2", 00:16:32.475 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:32.475 "is_configured": true, 00:16:32.475 "data_offset": 2048, 00:16:32.475 "data_size": 63488 00:16:32.475 } 00:16:32.475 ] 00:16:32.475 }' 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:32.475 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.734 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:32.734 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:32.734 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.734 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.734 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.734 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.734 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.734 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.734 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.734 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.734 18:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.734 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.734 18:14:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.734 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.734 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.734 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.734 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.734 "name": "raid_bdev1", 00:16:32.734 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:32.734 "strip_size_kb": 0, 00:16:32.734 "state": "online", 00:16:32.734 "raid_level": "raid1", 00:16:32.734 "superblock": true, 00:16:32.734 "num_base_bdevs": 2, 00:16:32.734 "num_base_bdevs_discovered": 2, 00:16:32.734 "num_base_bdevs_operational": 2, 00:16:32.734 "base_bdevs_list": [ 00:16:32.734 { 00:16:32.734 "name": "spare", 00:16:32.734 "uuid": "63c62394-d009-559d-8e85-6bd93567990a", 00:16:32.734 "is_configured": true, 00:16:32.734 "data_offset": 2048, 00:16:32.734 "data_size": 63488 00:16:32.734 }, 00:16:32.734 { 00:16:32.734 "name": "BaseBdev2", 00:16:32.734 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:32.734 "is_configured": true, 00:16:32.734 "data_offset": 2048, 00:16:32.734 "data_size": 63488 00:16:32.734 } 00:16:32.734 ] 00:16:32.734 }' 00:16:32.734 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.734 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.992 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.992 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.992 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.992 [2024-12-06 18:14:58.494880] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.992 [2024-12-06 18:14:58.494921] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.286 88.75 IOPS, 266.25 MiB/s 00:16:33.286 Latency(us) 00:16:33.286 [2024-12-06T18:14:58.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.286 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:33.286 raid_bdev1 : 8.09 88.06 264.17 0.00 0.00 14728.12 294.17 112006.98 00:16:33.286 [2024-12-06T18:14:58.806Z] =================================================================================================================== 00:16:33.286 [2024-12-06T18:14:58.806Z] Total : 88.06 264.17 0.00 0.00 14728.12 294.17 112006.98 00:16:33.286 [2024-12-06 18:14:58.598670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.286 [2024-12-06 18:14:58.598754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.286 [2024-12-06 18:14:58.598876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.286 [2024-12-06 18:14:58.598894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:33.286 { 00:16:33.286 "results": [ 00:16:33.286 { 00:16:33.286 "job": "raid_bdev1", 00:16:33.286 "core_mask": "0x1", 00:16:33.286 "workload": "randrw", 00:16:33.286 "percentage": 50, 00:16:33.286 "status": "finished", 00:16:33.286 "queue_depth": 2, 00:16:33.286 "io_size": 3145728, 00:16:33.286 "runtime": 8.085602, 00:16:33.286 "iops": 88.05775995405166, 00:16:33.286 "mibps": 264.17327986215497, 00:16:33.286 "io_failed": 0, 00:16:33.286 "io_timeout": 0, 00:16:33.286 "avg_latency_us": 14728.121920326865, 00:16:33.286 "min_latency_us": 294.16727272727275, 00:16:33.286 "max_latency_us": 112006.98181818181 00:16:33.286 } 00:16:33.286 
], 00:16:33.286 "core_count": 1 00:16:33.286 } 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:33.286 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:33.286 18:14:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:33.562 /dev/nbd0 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:33.562 1+0 records in 00:16:33.562 1+0 records out 00:16:33.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353147 s, 11.6 MB/s 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.562 
18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:33.562 18:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:33.820 /dev/nbd1 00:16:33.820 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:33.820 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:16:33.820 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:33.820 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:33.820 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:33.820 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:33.820 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:33.820 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:33.820 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:33.820 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:33.821 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:33.821 1+0 records in 00:16:33.821 1+0 records out 00:16:33.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345993 s, 11.8 MB/s 00:16:33.821 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.821 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:33.821 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.821 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:33.821 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:33.821 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.821 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:33.821 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:34.079 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:34.079 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.079 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:34.079 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:34.079 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:34.079 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.079 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:34.338 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:34.338 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:34.338 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:34.338 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:34.338 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:34.338 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:34.338 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:34.338 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.338 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:34.338 18:14:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.338 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:34.338 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:34.338 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:34.338 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.338 18:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.597 18:15:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.597 [2024-12-06 18:15:00.044804] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:34.597 [2024-12-06 18:15:00.044863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.597 [2024-12-06 18:15:00.044898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:34.597 [2024-12-06 18:15:00.044913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.597 [2024-12-06 18:15:00.047883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.597 [2024-12-06 18:15:00.047928] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:34.597 [2024-12-06 18:15:00.048040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:34.597 [2024-12-06 18:15:00.048101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.597 [2024-12-06 18:15:00.048284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.597 spare 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.597 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:16:34.856 [2024-12-06 18:15:00.148433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:34.856 [2024-12-06 18:15:00.148524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:34.856 [2024-12-06 18:15:00.148991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:16:34.856 [2024-12-06 18:15:00.149288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:34.856 [2024-12-06 18:15:00.149315] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:34.856 [2024-12-06 18:15:00.149573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.856 "name": "raid_bdev1", 00:16:34.856 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:34.856 "strip_size_kb": 0, 00:16:34.856 "state": "online", 00:16:34.856 "raid_level": "raid1", 00:16:34.856 "superblock": true, 00:16:34.856 "num_base_bdevs": 2, 00:16:34.856 "num_base_bdevs_discovered": 2, 00:16:34.856 "num_base_bdevs_operational": 2, 00:16:34.856 "base_bdevs_list": [ 00:16:34.856 { 00:16:34.856 "name": "spare", 00:16:34.856 "uuid": "63c62394-d009-559d-8e85-6bd93567990a", 00:16:34.856 "is_configured": true, 00:16:34.856 "data_offset": 2048, 00:16:34.856 "data_size": 63488 00:16:34.856 }, 00:16:34.856 { 00:16:34.856 "name": "BaseBdev2", 00:16:34.856 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:34.856 "is_configured": true, 00:16:34.856 "data_offset": 2048, 00:16:34.856 "data_size": 63488 00:16:34.856 } 00:16:34.856 ] 00:16:34.856 }' 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.856 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.424 "name": "raid_bdev1", 00:16:35.424 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:35.424 "strip_size_kb": 0, 00:16:35.424 "state": "online", 00:16:35.424 "raid_level": "raid1", 00:16:35.424 "superblock": true, 00:16:35.424 "num_base_bdevs": 2, 00:16:35.424 "num_base_bdevs_discovered": 2, 00:16:35.424 "num_base_bdevs_operational": 2, 00:16:35.424 "base_bdevs_list": [ 00:16:35.424 { 00:16:35.424 "name": "spare", 00:16:35.424 "uuid": "63c62394-d009-559d-8e85-6bd93567990a", 00:16:35.424 "is_configured": true, 00:16:35.424 "data_offset": 2048, 00:16:35.424 "data_size": 63488 00:16:35.424 }, 00:16:35.424 { 00:16:35.424 "name": "BaseBdev2", 00:16:35.424 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:35.424 "is_configured": true, 00:16:35.424 "data_offset": 2048, 00:16:35.424 "data_size": 63488 00:16:35.424 } 00:16:35.424 ] 00:16:35.424 }' 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:35.424 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.425 [2024-12-06 18:15:00.897895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.425 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.683 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.683 "name": "raid_bdev1", 00:16:35.683 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:35.683 "strip_size_kb": 0, 00:16:35.683 "state": "online", 00:16:35.683 "raid_level": "raid1", 00:16:35.683 "superblock": true, 00:16:35.683 "num_base_bdevs": 2, 00:16:35.683 "num_base_bdevs_discovered": 1, 00:16:35.683 "num_base_bdevs_operational": 1, 00:16:35.683 "base_bdevs_list": [ 00:16:35.683 { 00:16:35.683 "name": null, 00:16:35.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.683 "is_configured": false, 00:16:35.683 "data_offset": 0, 00:16:35.683 "data_size": 63488 00:16:35.683 }, 00:16:35.683 { 
00:16:35.683 "name": "BaseBdev2", 00:16:35.683 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:35.683 "is_configured": true, 00:16:35.683 "data_offset": 2048, 00:16:35.683 "data_size": 63488 00:16:35.683 } 00:16:35.683 ] 00:16:35.683 }' 00:16:35.683 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.683 18:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.941 18:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:35.941 18:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.941 18:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.941 [2024-12-06 18:15:01.434121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.941 [2024-12-06 18:15:01.434442] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:35.941 [2024-12-06 18:15:01.434469] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:35.941 [2024-12-06 18:15:01.434526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.942 [2024-12-06 18:15:01.450618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:16:35.942 18:15:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.942 18:15:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:35.942 [2024-12-06 18:15:01.453352] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.317 "name": "raid_bdev1", 00:16:37.317 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:37.317 "strip_size_kb": 0, 00:16:37.317 "state": "online", 
00:16:37.317 "raid_level": "raid1", 00:16:37.317 "superblock": true, 00:16:37.317 "num_base_bdevs": 2, 00:16:37.317 "num_base_bdevs_discovered": 2, 00:16:37.317 "num_base_bdevs_operational": 2, 00:16:37.317 "process": { 00:16:37.317 "type": "rebuild", 00:16:37.317 "target": "spare", 00:16:37.317 "progress": { 00:16:37.317 "blocks": 20480, 00:16:37.317 "percent": 32 00:16:37.317 } 00:16:37.317 }, 00:16:37.317 "base_bdevs_list": [ 00:16:37.317 { 00:16:37.317 "name": "spare", 00:16:37.317 "uuid": "63c62394-d009-559d-8e85-6bd93567990a", 00:16:37.317 "is_configured": true, 00:16:37.317 "data_offset": 2048, 00:16:37.317 "data_size": 63488 00:16:37.317 }, 00:16:37.317 { 00:16:37.317 "name": "BaseBdev2", 00:16:37.317 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:37.317 "is_configured": true, 00:16:37.317 "data_offset": 2048, 00:16:37.317 "data_size": 63488 00:16:37.317 } 00:16:37.317 ] 00:16:37.317 }' 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.317 [2024-12-06 18:15:02.623165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.317 [2024-12-06 18:15:02.663076] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:37.317 [2024-12-06 
18:15:02.663175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.317 [2024-12-06 18:15:02.663199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.317 [2024-12-06 18:15:02.663217] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.317 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.318 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.318 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.318 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.318 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:16:37.318 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.318 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.318 "name": "raid_bdev1", 00:16:37.318 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:37.318 "strip_size_kb": 0, 00:16:37.318 "state": "online", 00:16:37.318 "raid_level": "raid1", 00:16:37.318 "superblock": true, 00:16:37.318 "num_base_bdevs": 2, 00:16:37.318 "num_base_bdevs_discovered": 1, 00:16:37.318 "num_base_bdevs_operational": 1, 00:16:37.318 "base_bdevs_list": [ 00:16:37.318 { 00:16:37.318 "name": null, 00:16:37.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.318 "is_configured": false, 00:16:37.318 "data_offset": 0, 00:16:37.318 "data_size": 63488 00:16:37.318 }, 00:16:37.318 { 00:16:37.318 "name": "BaseBdev2", 00:16:37.318 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:37.318 "is_configured": true, 00:16:37.318 "data_offset": 2048, 00:16:37.318 "data_size": 63488 00:16:37.318 } 00:16:37.318 ] 00:16:37.318 }' 00:16:37.318 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.318 18:15:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.884 18:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:37.884 18:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.884 18:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.884 [2024-12-06 18:15:03.231546] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:37.884 [2024-12-06 18:15:03.231641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.884 [2024-12-06 18:15:03.231672] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:16:37.884 [2024-12-06 18:15:03.231691] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.884 [2024-12-06 18:15:03.232321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.884 [2024-12-06 18:15:03.232382] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:37.884 [2024-12-06 18:15:03.232521] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:37.884 [2024-12-06 18:15:03.232547] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:37.884 [2024-12-06 18:15:03.232561] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:37.884 [2024-12-06 18:15:03.232599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.884 [2024-12-06 18:15:03.249829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:16:37.884 spare 00:16:37.884 18:15:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.884 18:15:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:37.884 [2024-12-06 18:15:03.252404] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.945 "name": "raid_bdev1", 00:16:38.945 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:38.945 "strip_size_kb": 0, 00:16:38.945 "state": "online", 00:16:38.945 "raid_level": "raid1", 00:16:38.945 "superblock": true, 00:16:38.945 "num_base_bdevs": 2, 00:16:38.945 "num_base_bdevs_discovered": 2, 00:16:38.945 "num_base_bdevs_operational": 2, 00:16:38.945 "process": { 00:16:38.945 "type": "rebuild", 00:16:38.945 "target": "spare", 00:16:38.945 "progress": { 00:16:38.945 "blocks": 20480, 00:16:38.945 "percent": 32 00:16:38.945 } 00:16:38.945 }, 00:16:38.945 "base_bdevs_list": [ 00:16:38.945 { 00:16:38.945 "name": "spare", 00:16:38.945 "uuid": "63c62394-d009-559d-8e85-6bd93567990a", 00:16:38.945 "is_configured": true, 00:16:38.945 "data_offset": 2048, 00:16:38.945 "data_size": 63488 00:16:38.945 }, 00:16:38.945 { 00:16:38.945 "name": "BaseBdev2", 00:16:38.945 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:38.945 "is_configured": true, 00:16:38.945 "data_offset": 2048, 00:16:38.945 "data_size": 63488 00:16:38.945 } 00:16:38.945 ] 00:16:38.945 }' 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.945 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.946 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.946 [2024-12-06 18:15:04.426069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.946 [2024-12-06 18:15:04.461819] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:38.946 [2024-12-06 18:15:04.462074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.946 [2024-12-06 18:15:04.462351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.946 [2024-12-06 18:15:04.462404] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.205 "name": "raid_bdev1", 00:16:39.205 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:39.205 "strip_size_kb": 0, 00:16:39.205 "state": "online", 00:16:39.205 "raid_level": "raid1", 00:16:39.205 "superblock": true, 00:16:39.205 "num_base_bdevs": 2, 00:16:39.205 "num_base_bdevs_discovered": 1, 00:16:39.205 "num_base_bdevs_operational": 1, 00:16:39.205 "base_bdevs_list": [ 00:16:39.205 { 00:16:39.205 "name": null, 00:16:39.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.205 "is_configured": false, 00:16:39.205 "data_offset": 0, 00:16:39.205 "data_size": 63488 00:16:39.205 }, 00:16:39.205 { 00:16:39.205 "name": "BaseBdev2", 00:16:39.205 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:39.205 "is_configured": true, 00:16:39.205 "data_offset": 2048, 00:16:39.205 "data_size": 63488 00:16:39.205 } 00:16:39.205 ] 00:16:39.205 }' 
00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.205 18:15:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.772 "name": "raid_bdev1", 00:16:39.772 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:39.772 "strip_size_kb": 0, 00:16:39.772 "state": "online", 00:16:39.772 "raid_level": "raid1", 00:16:39.772 "superblock": true, 00:16:39.772 "num_base_bdevs": 2, 00:16:39.772 "num_base_bdevs_discovered": 1, 00:16:39.772 "num_base_bdevs_operational": 1, 00:16:39.772 "base_bdevs_list": [ 00:16:39.772 { 00:16:39.772 "name": null, 00:16:39.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.772 "is_configured": false, 00:16:39.772 "data_offset": 0, 
00:16:39.772 "data_size": 63488 00:16:39.772 }, 00:16:39.772 { 00:16:39.772 "name": "BaseBdev2", 00:16:39.772 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:39.772 "is_configured": true, 00:16:39.772 "data_offset": 2048, 00:16:39.772 "data_size": 63488 00:16:39.772 } 00:16:39.772 ] 00:16:39.772 }' 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.772 [2024-12-06 18:15:05.186132] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:39.772 [2024-12-06 18:15:05.186197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.772 [2024-12-06 18:15:05.186235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:39.772 [2024-12-06 18:15:05.186252] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.772 [2024-12-06 18:15:05.186839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.772 [2024-12-06 18:15:05.186871] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:39.772 [2024-12-06 18:15:05.186971] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:39.772 [2024-12-06 18:15:05.186993] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:39.772 [2024-12-06 18:15:05.187016] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:39.772 [2024-12-06 18:15:05.187029] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:39.772 BaseBdev1 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.772 18:15:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:40.705 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.705 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.705 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.705 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.705 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.705 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.705 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.705 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.705 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.705 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.705 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.706 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.706 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.706 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.706 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.964 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.964 "name": "raid_bdev1", 00:16:40.964 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:40.964 "strip_size_kb": 0, 00:16:40.964 "state": "online", 00:16:40.964 "raid_level": "raid1", 00:16:40.964 "superblock": true, 00:16:40.964 "num_base_bdevs": 2, 00:16:40.964 "num_base_bdevs_discovered": 1, 00:16:40.964 "num_base_bdevs_operational": 1, 00:16:40.964 "base_bdevs_list": [ 00:16:40.964 { 00:16:40.964 "name": null, 00:16:40.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.964 "is_configured": false, 00:16:40.964 "data_offset": 0, 00:16:40.964 "data_size": 63488 00:16:40.964 }, 00:16:40.964 { 00:16:40.964 "name": "BaseBdev2", 00:16:40.964 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:40.964 "is_configured": true, 00:16:40.964 "data_offset": 2048, 00:16:40.964 "data_size": 63488 00:16:40.964 } 00:16:40.964 ] 00:16:40.964 }' 00:16:40.964 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.964 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:16:41.223 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.223 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.223 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.223 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.223 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.223 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.223 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.223 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.223 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.223 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.482 "name": "raid_bdev1", 00:16:41.482 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:41.482 "strip_size_kb": 0, 00:16:41.482 "state": "online", 00:16:41.482 "raid_level": "raid1", 00:16:41.482 "superblock": true, 00:16:41.482 "num_base_bdevs": 2, 00:16:41.482 "num_base_bdevs_discovered": 1, 00:16:41.482 "num_base_bdevs_operational": 1, 00:16:41.482 "base_bdevs_list": [ 00:16:41.482 { 00:16:41.482 "name": null, 00:16:41.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.482 "is_configured": false, 00:16:41.482 "data_offset": 0, 00:16:41.482 "data_size": 63488 00:16:41.482 }, 00:16:41.482 { 00:16:41.482 "name": "BaseBdev2", 00:16:41.482 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:41.482 "is_configured": true, 
00:16:41.482 "data_offset": 2048, 00:16:41.482 "data_size": 63488 00:16:41.482 } 00:16:41.482 ] 00:16:41.482 }' 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.482 [2024-12-06 18:15:06.878908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.482 [2024-12-06 18:15:06.879250] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:41.482 [2024-12-06 18:15:06.879284] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:41.482 request: 00:16:41.482 { 00:16:41.482 "base_bdev": "BaseBdev1", 00:16:41.482 "raid_bdev": "raid_bdev1", 00:16:41.482 "method": "bdev_raid_add_base_bdev", 00:16:41.482 "req_id": 1 00:16:41.482 } 00:16:41.482 Got JSON-RPC error response 00:16:41.482 response: 00:16:41.482 { 00:16:41.482 "code": -22, 00:16:41.482 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:41.482 } 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:41.482 18:15:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.419 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.678 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.678 "name": "raid_bdev1", 00:16:42.678 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:42.678 "strip_size_kb": 0, 00:16:42.678 "state": "online", 00:16:42.678 "raid_level": "raid1", 00:16:42.678 "superblock": true, 00:16:42.678 "num_base_bdevs": 2, 00:16:42.678 "num_base_bdevs_discovered": 1, 00:16:42.678 "num_base_bdevs_operational": 1, 00:16:42.678 "base_bdevs_list": [ 00:16:42.678 { 00:16:42.678 "name": null, 00:16:42.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.678 "is_configured": false, 00:16:42.678 "data_offset": 0, 00:16:42.678 "data_size": 63488 00:16:42.678 }, 00:16:42.678 { 00:16:42.678 "name": "BaseBdev2", 00:16:42.678 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:42.678 "is_configured": true, 00:16:42.678 "data_offset": 2048, 00:16:42.678 "data_size": 63488 00:16:42.678 } 00:16:42.678 ] 00:16:42.678 }' 
00:16:42.678 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.678 18:15:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.936 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.936 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.936 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.936 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.936 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.936 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.936 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.936 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.936 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.936 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.195 "name": "raid_bdev1", 00:16:43.195 "uuid": "9448ce15-c66e-4ae6-867b-a7ba35de0875", 00:16:43.195 "strip_size_kb": 0, 00:16:43.195 "state": "online", 00:16:43.195 "raid_level": "raid1", 00:16:43.195 "superblock": true, 00:16:43.195 "num_base_bdevs": 2, 00:16:43.195 "num_base_bdevs_discovered": 1, 00:16:43.195 "num_base_bdevs_operational": 1, 00:16:43.195 "base_bdevs_list": [ 00:16:43.195 { 00:16:43.195 "name": null, 00:16:43.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.195 "is_configured": false, 00:16:43.195 "data_offset": 0, 
00:16:43.195 "data_size": 63488 00:16:43.195 }, 00:16:43.195 { 00:16:43.195 "name": "BaseBdev2", 00:16:43.195 "uuid": "f25fadd6-3694-577f-89b7-102e44029bdd", 00:16:43.195 "is_configured": true, 00:16:43.195 "data_offset": 2048, 00:16:43.195 "data_size": 63488 00:16:43.195 } 00:16:43.195 ] 00:16:43.195 }' 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77126 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77126 ']' 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77126 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77126 00:16:43.195 killing process with pid 77126 00:16:43.195 Received shutdown signal, test time was about 18.108653 seconds 00:16:43.195 00:16:43.195 Latency(us) 00:16:43.195 [2024-12-06T18:15:08.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.195 [2024-12-06T18:15:08.715Z] =================================================================================================================== 00:16:43.195 [2024-12-06T18:15:08.715Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77126' 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77126 00:16:43.195 [2024-12-06 18:15:08.601873] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.195 18:15:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77126 00:16:43.195 [2024-12-06 18:15:08.602032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.195 [2024-12-06 18:15:08.602101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.195 [2024-12-06 18:15:08.602128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:43.454 [2024-12-06 18:15:08.817121] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:44.468 00:16:44.468 real 0m21.575s 00:16:44.468 user 0m29.470s 00:16:44.468 sys 0m1.943s 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.468 ************************************ 00:16:44.468 END TEST raid_rebuild_test_sb_io 00:16:44.468 ************************************ 00:16:44.468 18:15:09 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:44.468 18:15:09 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:16:44.468 18:15:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:16:44.468 18:15:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.468 18:15:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.468 ************************************ 00:16:44.468 START TEST raid_rebuild_test 00:16:44.468 ************************************ 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:44.468 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:44.727 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77827 00:16:44.727 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:44.727 18:15:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77827 00:16:44.727 18:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77827 ']' 00:16:44.727 18:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.727 18:15:09 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.727 18:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.727 18:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.727 18:15:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.727 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:44.727 Zero copy mechanism will not be used. 00:16:44.727 [2024-12-06 18:15:10.095033] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:16:44.727 [2024-12-06 18:15:10.095221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77827 ] 00:16:44.986 [2024-12-06 18:15:10.286119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.986 [2024-12-06 18:15:10.440750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.244 [2024-12-06 18:15:10.645917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.244 [2024-12-06 18:15:10.646161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.809 BaseBdev1_malloc 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.809 [2024-12-06 18:15:11.190110] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:45.809 [2024-12-06 18:15:11.190185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.809 [2024-12-06 18:15:11.190217] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:45.809 [2024-12-06 18:15:11.190236] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.809 [2024-12-06 18:15:11.193004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.809 [2024-12-06 18:15:11.193058] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:45.809 BaseBdev1 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:16:45.809 BaseBdev2_malloc 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.809 [2024-12-06 18:15:11.237623] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:45.809 [2024-12-06 18:15:11.237698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.809 [2024-12-06 18:15:11.237730] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:45.809 [2024-12-06 18:15:11.237748] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.809 [2024-12-06 18:15:11.240456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.809 [2024-12-06 18:15:11.240512] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:45.809 BaseBdev2 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.809 BaseBdev3_malloc 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.809 [2024-12-06 18:15:11.298105] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:45.809 [2024-12-06 18:15:11.298175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.809 [2024-12-06 18:15:11.298206] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:45.809 [2024-12-06 18:15:11.298224] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.809 [2024-12-06 18:15:11.300922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.809 [2024-12-06 18:15:11.300973] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:45.809 BaseBdev3 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.809 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.066 BaseBdev4_malloc 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.066 [2024-12-06 18:15:11.349836] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:46.066 [2024-12-06 18:15:11.349912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.066 [2024-12-06 18:15:11.349942] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:46.066 [2024-12-06 18:15:11.349959] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.066 [2024-12-06 18:15:11.352583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.066 [2024-12-06 18:15:11.352792] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:46.066 BaseBdev4 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.066 spare_malloc 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.066 spare_delay 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:46.066 
18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.066 [2024-12-06 18:15:11.413273] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:46.066 [2024-12-06 18:15:11.413341] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.066 [2024-12-06 18:15:11.413368] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:46.066 [2024-12-06 18:15:11.413385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.066 [2024-12-06 18:15:11.416140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.066 [2024-12-06 18:15:11.416327] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:46.066 spare 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.066 [2024-12-06 18:15:11.421323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.066 [2024-12-06 18:15:11.423713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.066 [2024-12-06 18:15:11.423833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.066 [2024-12-06 18:15:11.423919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:46.066 [2024-12-06 18:15:11.424041] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:16:46.066 [2024-12-06 18:15:11.424064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:46.066 [2024-12-06 18:15:11.424386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:46.066 [2024-12-06 18:15:11.424612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:46.066 [2024-12-06 18:15:11.424632] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:46.066 [2024-12-06 18:15:11.424835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.066 18:15:11 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.066 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.066 "name": "raid_bdev1", 00:16:46.066 "uuid": "591c6403-09e3-4621-b3c4-f307a50606aa", 00:16:46.066 "strip_size_kb": 0, 00:16:46.066 "state": "online", 00:16:46.066 "raid_level": "raid1", 00:16:46.066 "superblock": false, 00:16:46.066 "num_base_bdevs": 4, 00:16:46.066 "num_base_bdevs_discovered": 4, 00:16:46.066 "num_base_bdevs_operational": 4, 00:16:46.066 "base_bdevs_list": [ 00:16:46.066 { 00:16:46.066 "name": "BaseBdev1", 00:16:46.066 "uuid": "abc8c606-5754-5bbf-92bc-cecea91f0c7f", 00:16:46.066 "is_configured": true, 00:16:46.066 "data_offset": 0, 00:16:46.066 "data_size": 65536 00:16:46.066 }, 00:16:46.066 { 00:16:46.066 "name": "BaseBdev2", 00:16:46.066 "uuid": "6a5296a2-1723-5bca-ab4a-8f95fcc878af", 00:16:46.066 "is_configured": true, 00:16:46.066 "data_offset": 0, 00:16:46.066 "data_size": 65536 00:16:46.066 }, 00:16:46.066 { 00:16:46.066 "name": "BaseBdev3", 00:16:46.066 "uuid": "a6834f44-3089-539f-98b2-3c878de41d63", 00:16:46.067 "is_configured": true, 00:16:46.067 "data_offset": 0, 00:16:46.067 "data_size": 65536 00:16:46.067 }, 00:16:46.067 { 00:16:46.067 "name": "BaseBdev4", 00:16:46.067 "uuid": "8e0adaac-927e-5cee-b411-6ebd3e047092", 00:16:46.067 "is_configured": true, 00:16:46.067 "data_offset": 0, 00:16:46.067 "data_size": 65536 00:16:46.067 } 00:16:46.067 ] 00:16:46.067 }' 00:16:46.067 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.067 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:46.631 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:46.631 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:46.631 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.631 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.631 [2024-12-06 18:15:11.929894] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.631 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.631 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:46.631 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.631 18:15:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:46.631 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.631 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.631 18:15:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.632 18:15:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:46.632 18:15:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:46.632 18:15:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:46.632 18:15:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:46.632 18:15:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:46.632 18:15:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.632 18:15:12 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:46.632 18:15:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:46.632 18:15:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:46.632 18:15:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:46.632 18:15:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:46.632 18:15:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:46.632 18:15:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:46.632 18:15:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:46.889 [2024-12-06 18:15:12.305630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:46.889 /dev/nbd0 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:46.889 18:15:12 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:46.889 1+0 records in 00:16:46.889 1+0 records out 00:16:46.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346165 s, 11.8 MB/s 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:46.889 18:15:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:56.861 65536+0 records in 00:16:56.861 65536+0 records out 00:16:56.861 33554432 bytes (34 MB, 32 MiB) copied, 8.17844 s, 4.1 MB/s 00:16:56.861 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:56.861 18:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:56.862 
18:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:56.862 [2024-12-06 18:15:20.838273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.862 [2024-12-06 18:15:20.875909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.862 "name": "raid_bdev1", 00:16:56.862 "uuid": "591c6403-09e3-4621-b3c4-f307a50606aa", 00:16:56.862 "strip_size_kb": 0, 00:16:56.862 "state": "online", 00:16:56.862 "raid_level": "raid1", 00:16:56.862 "superblock": false, 00:16:56.862 "num_base_bdevs": 4, 00:16:56.862 "num_base_bdevs_discovered": 3, 00:16:56.862 "num_base_bdevs_operational": 3, 00:16:56.862 "base_bdevs_list": [ 00:16:56.862 { 00:16:56.862 "name": null, 00:16:56.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.862 
"is_configured": false, 00:16:56.862 "data_offset": 0, 00:16:56.862 "data_size": 65536 00:16:56.862 }, 00:16:56.862 { 00:16:56.862 "name": "BaseBdev2", 00:16:56.862 "uuid": "6a5296a2-1723-5bca-ab4a-8f95fcc878af", 00:16:56.862 "is_configured": true, 00:16:56.862 "data_offset": 0, 00:16:56.862 "data_size": 65536 00:16:56.862 }, 00:16:56.862 { 00:16:56.862 "name": "BaseBdev3", 00:16:56.862 "uuid": "a6834f44-3089-539f-98b2-3c878de41d63", 00:16:56.862 "is_configured": true, 00:16:56.862 "data_offset": 0, 00:16:56.862 "data_size": 65536 00:16:56.862 }, 00:16:56.862 { 00:16:56.862 "name": "BaseBdev4", 00:16:56.862 "uuid": "8e0adaac-927e-5cee-b411-6ebd3e047092", 00:16:56.862 "is_configured": true, 00:16:56.862 "data_offset": 0, 00:16:56.862 "data_size": 65536 00:16:56.862 } 00:16:56.862 ] 00:16:56.862 }' 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.862 18:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.862 18:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:56.862 18:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.862 18:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.862 [2024-12-06 18:15:21.396045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:56.862 [2024-12-06 18:15:21.411135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:16:56.862 18:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.862 18:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:56.862 [2024-12-06 18:15:21.413830] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:57.121 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.121 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.121 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.121 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.121 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.121 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.121 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.121 18:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.121 18:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.121 18:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.121 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.121 "name": "raid_bdev1", 00:16:57.121 "uuid": "591c6403-09e3-4621-b3c4-f307a50606aa", 00:16:57.121 "strip_size_kb": 0, 00:16:57.121 "state": "online", 00:16:57.121 "raid_level": "raid1", 00:16:57.121 "superblock": false, 00:16:57.121 "num_base_bdevs": 4, 00:16:57.121 "num_base_bdevs_discovered": 4, 00:16:57.121 "num_base_bdevs_operational": 4, 00:16:57.121 "process": { 00:16:57.121 "type": "rebuild", 00:16:57.121 "target": "spare", 00:16:57.121 "progress": { 00:16:57.121 "blocks": 20480, 00:16:57.121 "percent": 31 00:16:57.121 } 00:16:57.121 }, 00:16:57.121 "base_bdevs_list": [ 00:16:57.121 { 00:16:57.121 "name": "spare", 00:16:57.121 "uuid": "2b94f14f-94e7-5e68-8304-45cf7ce4b515", 00:16:57.121 "is_configured": true, 00:16:57.121 "data_offset": 0, 00:16:57.121 "data_size": 65536 00:16:57.121 }, 00:16:57.121 { 00:16:57.121 "name": "BaseBdev2", 00:16:57.121 "uuid": 
"6a5296a2-1723-5bca-ab4a-8f95fcc878af", 00:16:57.121 "is_configured": true, 00:16:57.121 "data_offset": 0, 00:16:57.121 "data_size": 65536 00:16:57.121 }, 00:16:57.122 { 00:16:57.122 "name": "BaseBdev3", 00:16:57.122 "uuid": "a6834f44-3089-539f-98b2-3c878de41d63", 00:16:57.122 "is_configured": true, 00:16:57.122 "data_offset": 0, 00:16:57.122 "data_size": 65536 00:16:57.122 }, 00:16:57.122 { 00:16:57.122 "name": "BaseBdev4", 00:16:57.122 "uuid": "8e0adaac-927e-5cee-b411-6ebd3e047092", 00:16:57.122 "is_configured": true, 00:16:57.122 "data_offset": 0, 00:16:57.122 "data_size": 65536 00:16:57.122 } 00:16:57.122 ] 00:16:57.122 }' 00:16:57.122 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.122 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.122 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.122 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.122 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:57.122 18:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.122 18:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.122 [2024-12-06 18:15:22.583469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.122 [2024-12-06 18:15:22.623411] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:57.122 [2024-12-06 18:15:22.623510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.122 [2024-12-06 18:15:22.623535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.122 [2024-12-06 18:15:22.623549] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.381 "name": "raid_bdev1", 00:16:57.381 "uuid": "591c6403-09e3-4621-b3c4-f307a50606aa", 00:16:57.381 "strip_size_kb": 0, 00:16:57.381 "state": "online", 
00:16:57.381 "raid_level": "raid1", 00:16:57.381 "superblock": false, 00:16:57.381 "num_base_bdevs": 4, 00:16:57.381 "num_base_bdevs_discovered": 3, 00:16:57.381 "num_base_bdevs_operational": 3, 00:16:57.381 "base_bdevs_list": [ 00:16:57.381 { 00:16:57.381 "name": null, 00:16:57.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.381 "is_configured": false, 00:16:57.381 "data_offset": 0, 00:16:57.381 "data_size": 65536 00:16:57.381 }, 00:16:57.381 { 00:16:57.381 "name": "BaseBdev2", 00:16:57.381 "uuid": "6a5296a2-1723-5bca-ab4a-8f95fcc878af", 00:16:57.381 "is_configured": true, 00:16:57.381 "data_offset": 0, 00:16:57.381 "data_size": 65536 00:16:57.381 }, 00:16:57.381 { 00:16:57.381 "name": "BaseBdev3", 00:16:57.381 "uuid": "a6834f44-3089-539f-98b2-3c878de41d63", 00:16:57.381 "is_configured": true, 00:16:57.381 "data_offset": 0, 00:16:57.381 "data_size": 65536 00:16:57.381 }, 00:16:57.381 { 00:16:57.381 "name": "BaseBdev4", 00:16:57.381 "uuid": "8e0adaac-927e-5cee-b411-6ebd3e047092", 00:16:57.381 "is_configured": true, 00:16:57.381 "data_offset": 0, 00:16:57.381 "data_size": 65536 00:16:57.381 } 00:16:57.381 ] 00:16:57.381 }' 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.381 18:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.950 18:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.950 18:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.950 18:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.950 18:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.950 18:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.950 18:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:57.950 18:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.950 18:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.950 18:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.950 18:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.950 18:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.950 "name": "raid_bdev1", 00:16:57.951 "uuid": "591c6403-09e3-4621-b3c4-f307a50606aa", 00:16:57.951 "strip_size_kb": 0, 00:16:57.951 "state": "online", 00:16:57.951 "raid_level": "raid1", 00:16:57.951 "superblock": false, 00:16:57.951 "num_base_bdevs": 4, 00:16:57.951 "num_base_bdevs_discovered": 3, 00:16:57.951 "num_base_bdevs_operational": 3, 00:16:57.951 "base_bdevs_list": [ 00:16:57.951 { 00:16:57.951 "name": null, 00:16:57.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.951 "is_configured": false, 00:16:57.951 "data_offset": 0, 00:16:57.951 "data_size": 65536 00:16:57.951 }, 00:16:57.951 { 00:16:57.951 "name": "BaseBdev2", 00:16:57.951 "uuid": "6a5296a2-1723-5bca-ab4a-8f95fcc878af", 00:16:57.951 "is_configured": true, 00:16:57.951 "data_offset": 0, 00:16:57.951 "data_size": 65536 00:16:57.951 }, 00:16:57.951 { 00:16:57.951 "name": "BaseBdev3", 00:16:57.951 "uuid": "a6834f44-3089-539f-98b2-3c878de41d63", 00:16:57.951 "is_configured": true, 00:16:57.951 "data_offset": 0, 00:16:57.951 "data_size": 65536 00:16:57.951 }, 00:16:57.951 { 00:16:57.951 "name": "BaseBdev4", 00:16:57.951 "uuid": "8e0adaac-927e-5cee-b411-6ebd3e047092", 00:16:57.951 "is_configured": true, 00:16:57.951 "data_offset": 0, 00:16:57.951 "data_size": 65536 00:16:57.951 } 00:16:57.951 ] 00:16:57.951 }' 00:16:57.951 18:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.951 18:15:23 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.951 18:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.951 18:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.951 18:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:57.951 18:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.951 18:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.951 [2024-12-06 18:15:23.332029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:57.951 [2024-12-06 18:15:23.346096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:16:57.951 18:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.951 18:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:57.951 [2024-12-06 18:15:23.348650] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:58.884 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.884 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.884 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.884 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.884 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.884 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.884 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.884 18:15:24 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.884 18:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.884 18:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.142 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.142 "name": "raid_bdev1", 00:16:59.142 "uuid": "591c6403-09e3-4621-b3c4-f307a50606aa", 00:16:59.142 "strip_size_kb": 0, 00:16:59.142 "state": "online", 00:16:59.142 "raid_level": "raid1", 00:16:59.142 "superblock": false, 00:16:59.142 "num_base_bdevs": 4, 00:16:59.142 "num_base_bdevs_discovered": 4, 00:16:59.142 "num_base_bdevs_operational": 4, 00:16:59.142 "process": { 00:16:59.142 "type": "rebuild", 00:16:59.142 "target": "spare", 00:16:59.142 "progress": { 00:16:59.142 "blocks": 20480, 00:16:59.142 "percent": 31 00:16:59.142 } 00:16:59.142 }, 00:16:59.142 "base_bdevs_list": [ 00:16:59.143 { 00:16:59.143 "name": "spare", 00:16:59.143 "uuid": "2b94f14f-94e7-5e68-8304-45cf7ce4b515", 00:16:59.143 "is_configured": true, 00:16:59.143 "data_offset": 0, 00:16:59.143 "data_size": 65536 00:16:59.143 }, 00:16:59.143 { 00:16:59.143 "name": "BaseBdev2", 00:16:59.143 "uuid": "6a5296a2-1723-5bca-ab4a-8f95fcc878af", 00:16:59.143 "is_configured": true, 00:16:59.143 "data_offset": 0, 00:16:59.143 "data_size": 65536 00:16:59.143 }, 00:16:59.143 { 00:16:59.143 "name": "BaseBdev3", 00:16:59.143 "uuid": "a6834f44-3089-539f-98b2-3c878de41d63", 00:16:59.143 "is_configured": true, 00:16:59.143 "data_offset": 0, 00:16:59.143 "data_size": 65536 00:16:59.143 }, 00:16:59.143 { 00:16:59.143 "name": "BaseBdev4", 00:16:59.143 "uuid": "8e0adaac-927e-5cee-b411-6ebd3e047092", 00:16:59.143 "is_configured": true, 00:16:59.143 "data_offset": 0, 00:16:59.143 "data_size": 65536 00:16:59.143 } 00:16:59.143 ] 00:16:59.143 }' 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.143 [2024-12-06 18:15:24.518119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:59.143 [2024-12-06 18:15:24.557318] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.143 
18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.143 "name": "raid_bdev1", 00:16:59.143 "uuid": "591c6403-09e3-4621-b3c4-f307a50606aa", 00:16:59.143 "strip_size_kb": 0, 00:16:59.143 "state": "online", 00:16:59.143 "raid_level": "raid1", 00:16:59.143 "superblock": false, 00:16:59.143 "num_base_bdevs": 4, 00:16:59.143 "num_base_bdevs_discovered": 3, 00:16:59.143 "num_base_bdevs_operational": 3, 00:16:59.143 "process": { 00:16:59.143 "type": "rebuild", 00:16:59.143 "target": "spare", 00:16:59.143 "progress": { 00:16:59.143 "blocks": 24576, 00:16:59.143 "percent": 37 00:16:59.143 } 00:16:59.143 }, 00:16:59.143 "base_bdevs_list": [ 00:16:59.143 { 00:16:59.143 "name": "spare", 00:16:59.143 "uuid": "2b94f14f-94e7-5e68-8304-45cf7ce4b515", 00:16:59.143 "is_configured": true, 00:16:59.143 "data_offset": 0, 00:16:59.143 "data_size": 65536 00:16:59.143 }, 00:16:59.143 { 00:16:59.143 "name": null, 00:16:59.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.143 "is_configured": false, 00:16:59.143 "data_offset": 0, 00:16:59.143 "data_size": 65536 00:16:59.143 }, 00:16:59.143 { 00:16:59.143 "name": "BaseBdev3", 00:16:59.143 "uuid": "a6834f44-3089-539f-98b2-3c878de41d63", 00:16:59.143 "is_configured": true, 
00:16:59.143 "data_offset": 0, 00:16:59.143 "data_size": 65536 00:16:59.143 }, 00:16:59.143 { 00:16:59.143 "name": "BaseBdev4", 00:16:59.143 "uuid": "8e0adaac-927e-5cee-b411-6ebd3e047092", 00:16:59.143 "is_configured": true, 00:16:59.143 "data_offset": 0, 00:16:59.143 "data_size": 65536 00:16:59.143 } 00:16:59.143 ] 00:16:59.143 }' 00:16:59.143 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=480 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.402 18:15:24 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.402 "name": "raid_bdev1", 00:16:59.402 "uuid": "591c6403-09e3-4621-b3c4-f307a50606aa", 00:16:59.402 "strip_size_kb": 0, 00:16:59.402 "state": "online", 00:16:59.402 "raid_level": "raid1", 00:16:59.402 "superblock": false, 00:16:59.402 "num_base_bdevs": 4, 00:16:59.402 "num_base_bdevs_discovered": 3, 00:16:59.402 "num_base_bdevs_operational": 3, 00:16:59.402 "process": { 00:16:59.402 "type": "rebuild", 00:16:59.402 "target": "spare", 00:16:59.402 "progress": { 00:16:59.402 "blocks": 26624, 00:16:59.402 "percent": 40 00:16:59.402 } 00:16:59.402 }, 00:16:59.402 "base_bdevs_list": [ 00:16:59.402 { 00:16:59.402 "name": "spare", 00:16:59.402 "uuid": "2b94f14f-94e7-5e68-8304-45cf7ce4b515", 00:16:59.402 "is_configured": true, 00:16:59.402 "data_offset": 0, 00:16:59.402 "data_size": 65536 00:16:59.402 }, 00:16:59.402 { 00:16:59.402 "name": null, 00:16:59.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.402 "is_configured": false, 00:16:59.402 "data_offset": 0, 00:16:59.402 "data_size": 65536 00:16:59.402 }, 00:16:59.402 { 00:16:59.402 "name": "BaseBdev3", 00:16:59.402 "uuid": "a6834f44-3089-539f-98b2-3c878de41d63", 00:16:59.402 "is_configured": true, 00:16:59.402 "data_offset": 0, 00:16:59.402 "data_size": 65536 00:16:59.402 }, 00:16:59.402 { 00:16:59.402 "name": "BaseBdev4", 00:16:59.402 "uuid": "8e0adaac-927e-5cee-b411-6ebd3e047092", 00:16:59.402 "is_configured": true, 00:16:59.402 "data_offset": 0, 00:16:59.402 "data_size": 65536 00:16:59.402 } 00:16:59.402 ] 00:16:59.402 }' 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.402 18:15:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.794 18:15:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.794 18:15:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.794 18:15:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.794 18:15:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.794 18:15:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.794 18:15:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.794 18:15:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.794 18:15:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.794 18:15:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.794 18:15:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.794 18:15:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.794 18:15:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.794 "name": "raid_bdev1", 00:17:00.794 "uuid": "591c6403-09e3-4621-b3c4-f307a50606aa", 00:17:00.794 "strip_size_kb": 0, 00:17:00.794 "state": "online", 00:17:00.794 "raid_level": "raid1", 00:17:00.794 "superblock": false, 00:17:00.794 "num_base_bdevs": 4, 00:17:00.794 "num_base_bdevs_discovered": 3, 00:17:00.794 "num_base_bdevs_operational": 3, 00:17:00.794 "process": { 00:17:00.794 "type": "rebuild", 00:17:00.794 "target": "spare", 00:17:00.794 "progress": { 00:17:00.794 
"blocks": 51200, 00:17:00.794 "percent": 78 00:17:00.794 } 00:17:00.794 }, 00:17:00.794 "base_bdevs_list": [ 00:17:00.794 { 00:17:00.794 "name": "spare", 00:17:00.794 "uuid": "2b94f14f-94e7-5e68-8304-45cf7ce4b515", 00:17:00.794 "is_configured": true, 00:17:00.794 "data_offset": 0, 00:17:00.794 "data_size": 65536 00:17:00.794 }, 00:17:00.794 { 00:17:00.794 "name": null, 00:17:00.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.794 "is_configured": false, 00:17:00.794 "data_offset": 0, 00:17:00.794 "data_size": 65536 00:17:00.794 }, 00:17:00.794 { 00:17:00.794 "name": "BaseBdev3", 00:17:00.794 "uuid": "a6834f44-3089-539f-98b2-3c878de41d63", 00:17:00.794 "is_configured": true, 00:17:00.794 "data_offset": 0, 00:17:00.794 "data_size": 65536 00:17:00.794 }, 00:17:00.794 { 00:17:00.794 "name": "BaseBdev4", 00:17:00.794 "uuid": "8e0adaac-927e-5cee-b411-6ebd3e047092", 00:17:00.794 "is_configured": true, 00:17:00.794 "data_offset": 0, 00:17:00.794 "data_size": 65536 00:17:00.794 } 00:17:00.794 ] 00:17:00.794 }' 00:17:00.794 18:15:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.794 18:15:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.794 18:15:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.794 18:15:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.794 18:15:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.390 [2024-12-06 18:15:26.573780] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:01.390 [2024-12-06 18:15:26.573898] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:01.390 [2024-12-06 18:15:26.573957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.675 18:15:27 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.675 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.675 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.675 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.675 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.675 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.675 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.675 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.675 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.675 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.675 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.675 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.675 "name": "raid_bdev1", 00:17:01.675 "uuid": "591c6403-09e3-4621-b3c4-f307a50606aa", 00:17:01.675 "strip_size_kb": 0, 00:17:01.675 "state": "online", 00:17:01.675 "raid_level": "raid1", 00:17:01.675 "superblock": false, 00:17:01.675 "num_base_bdevs": 4, 00:17:01.675 "num_base_bdevs_discovered": 3, 00:17:01.675 "num_base_bdevs_operational": 3, 00:17:01.675 "base_bdevs_list": [ 00:17:01.675 { 00:17:01.675 "name": "spare", 00:17:01.675 "uuid": "2b94f14f-94e7-5e68-8304-45cf7ce4b515", 00:17:01.675 "is_configured": true, 00:17:01.675 "data_offset": 0, 00:17:01.675 "data_size": 65536 00:17:01.675 }, 00:17:01.675 { 00:17:01.675 "name": null, 00:17:01.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.675 "is_configured": false, 00:17:01.675 
"data_offset": 0, 00:17:01.676 "data_size": 65536 00:17:01.676 }, 00:17:01.676 { 00:17:01.676 "name": "BaseBdev3", 00:17:01.676 "uuid": "a6834f44-3089-539f-98b2-3c878de41d63", 00:17:01.676 "is_configured": true, 00:17:01.676 "data_offset": 0, 00:17:01.676 "data_size": 65536 00:17:01.676 }, 00:17:01.676 { 00:17:01.676 "name": "BaseBdev4", 00:17:01.676 "uuid": "8e0adaac-927e-5cee-b411-6ebd3e047092", 00:17:01.676 "is_configured": true, 00:17:01.676 "data_offset": 0, 00:17:01.676 "data_size": 65536 00:17:01.676 } 00:17:01.676 ] 00:17:01.676 }' 00:17:01.676 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.676 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:01.933 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.933 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:01.933 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:01.933 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.933 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.933 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.933 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.933 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.933 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.933 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.933 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.933 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.933 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.933 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.933 "name": "raid_bdev1", 00:17:01.933 "uuid": "591c6403-09e3-4621-b3c4-f307a50606aa", 00:17:01.933 "strip_size_kb": 0, 00:17:01.933 "state": "online", 00:17:01.933 "raid_level": "raid1", 00:17:01.933 "superblock": false, 00:17:01.933 "num_base_bdevs": 4, 00:17:01.933 "num_base_bdevs_discovered": 3, 00:17:01.933 "num_base_bdevs_operational": 3, 00:17:01.933 "base_bdevs_list": [ 00:17:01.933 { 00:17:01.933 "name": "spare", 00:17:01.933 "uuid": "2b94f14f-94e7-5e68-8304-45cf7ce4b515", 00:17:01.933 "is_configured": true, 00:17:01.933 "data_offset": 0, 00:17:01.933 "data_size": 65536 00:17:01.933 }, 00:17:01.933 { 00:17:01.933 "name": null, 00:17:01.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.933 "is_configured": false, 00:17:01.933 "data_offset": 0, 00:17:01.933 "data_size": 65536 00:17:01.933 }, 00:17:01.933 { 00:17:01.933 "name": "BaseBdev3", 00:17:01.933 "uuid": "a6834f44-3089-539f-98b2-3c878de41d63", 00:17:01.933 "is_configured": true, 00:17:01.933 "data_offset": 0, 00:17:01.933 "data_size": 65536 00:17:01.933 }, 00:17:01.933 { 00:17:01.933 "name": "BaseBdev4", 00:17:01.933 "uuid": "8e0adaac-927e-5cee-b411-6ebd3e047092", 00:17:01.933 "is_configured": true, 00:17:01.933 "data_offset": 0, 00:17:01.933 "data_size": 65536 00:17:01.933 } 00:17:01.933 ] 00:17:01.934 }' 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.934 
18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.934 "name": "raid_bdev1", 00:17:01.934 "uuid": "591c6403-09e3-4621-b3c4-f307a50606aa", 00:17:01.934 "strip_size_kb": 0, 00:17:01.934 "state": "online", 00:17:01.934 "raid_level": "raid1", 00:17:01.934 "superblock": false, 00:17:01.934 "num_base_bdevs": 4, 00:17:01.934 "num_base_bdevs_discovered": 
3, 00:17:01.934 "num_base_bdevs_operational": 3, 00:17:01.934 "base_bdevs_list": [ 00:17:01.934 { 00:17:01.934 "name": "spare", 00:17:01.934 "uuid": "2b94f14f-94e7-5e68-8304-45cf7ce4b515", 00:17:01.934 "is_configured": true, 00:17:01.934 "data_offset": 0, 00:17:01.934 "data_size": 65536 00:17:01.934 }, 00:17:01.934 { 00:17:01.934 "name": null, 00:17:01.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.934 "is_configured": false, 00:17:01.934 "data_offset": 0, 00:17:01.934 "data_size": 65536 00:17:01.934 }, 00:17:01.934 { 00:17:01.934 "name": "BaseBdev3", 00:17:01.934 "uuid": "a6834f44-3089-539f-98b2-3c878de41d63", 00:17:01.934 "is_configured": true, 00:17:01.934 "data_offset": 0, 00:17:01.934 "data_size": 65536 00:17:01.934 }, 00:17:01.934 { 00:17:01.934 "name": "BaseBdev4", 00:17:01.934 "uuid": "8e0adaac-927e-5cee-b411-6ebd3e047092", 00:17:01.934 "is_configured": true, 00:17:01.934 "data_offset": 0, 00:17:01.934 "data_size": 65536 00:17:01.934 } 00:17:01.934 ] 00:17:01.934 }' 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.934 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.499 [2024-12-06 18:15:27.921880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.499 [2024-12-06 18:15:27.921921] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.499 [2024-12-06 18:15:27.922015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.499 [2024-12-06 18:15:27.922124] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:17:02.499 [2024-12-06 18:15:27.922151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.499 18:15:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:03.063 /dev/nbd0 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.063 1+0 records in 00:17:03.063 1+0 records out 00:17:03.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262821 s, 15.6 MB/s 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.063 18:15:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:03.063 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:03.320 /dev/nbd1 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.320 1+0 records in 00:17:03.320 1+0 records out 00:17:03.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420277 s, 9.7 MB/s 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:03.320 18:15:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:03.577 18:15:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:03.577 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:03.577 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:03.577 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:03.577 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:03.577 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.577 18:15:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:03.834 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:03.834 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:03.834 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:03.834 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:17:03.834 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.834 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:03.834 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:03.834 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.834 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.834 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77827 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77827 ']' 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77827 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77827 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.399 killing process with pid 77827 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77827' 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77827 00:17:04.399 Received shutdown signal, test time was about 60.000000 seconds 00:17:04.399 00:17:04.399 Latency(us) 00:17:04.399 [2024-12-06T18:15:29.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.399 [2024-12-06T18:15:29.919Z] =================================================================================================================== 00:17:04.399 [2024-12-06T18:15:29.919Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:04.399 [2024-12-06 18:15:29.695141] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.399 18:15:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77827 00:17:04.656 [2024-12-06 18:15:30.146480] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:06.030 00:17:06.030 real 0m21.242s 00:17:06.030 user 0m23.987s 00:17:06.030 sys 0m3.597s 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.030 ************************************ 00:17:06.030 END TEST raid_rebuild_test 00:17:06.030 ************************************ 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.030 
18:15:31 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:17:06.030 18:15:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:06.030 18:15:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.030 18:15:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.030 ************************************ 00:17:06.030 START TEST raid_rebuild_test_sb 00:17:06.030 ************************************ 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.030 
18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78312 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78312 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78312 ']' 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.030 18:15:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.030 [2024-12-06 18:15:31.381443] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:17:06.030 [2024-12-06 18:15:31.381620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78312 ] 00:17:06.030 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:06.030 Zero copy mechanism will not be used. 
00:17:06.288 [2024-12-06 18:15:31.569556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.289 [2024-12-06 18:15:31.720507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.572 [2024-12-06 18:15:31.924725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.572 [2024-12-06 18:15:31.924809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.139 BaseBdev1_malloc 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.139 [2024-12-06 18:15:32.442782] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:07.139 [2024-12-06 18:15:32.442861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.139 [2024-12-06 18:15:32.442890] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:07.139 [2024-12-06 
18:15:32.442908] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.139 [2024-12-06 18:15:32.445692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.139 [2024-12-06 18:15:32.445742] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:07.139 BaseBdev1 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.139 BaseBdev2_malloc 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.139 [2024-12-06 18:15:32.494566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:07.139 [2024-12-06 18:15:32.494654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.139 [2024-12-06 18:15:32.494684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:07.139 [2024-12-06 18:15:32.494702] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.139 [2024-12-06 18:15:32.497403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:17:07.139 [2024-12-06 18:15:32.497451] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:07.139 BaseBdev2 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.139 BaseBdev3_malloc 00:17:07.139 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.140 [2024-12-06 18:15:32.569409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:07.140 [2024-12-06 18:15:32.569477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.140 [2024-12-06 18:15:32.569513] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:07.140 [2024-12-06 18:15:32.569531] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.140 [2024-12-06 18:15:32.572399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.140 [2024-12-06 18:15:32.572460] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:07.140 BaseBdev3 00:17:07.140 18:15:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.140 BaseBdev4_malloc 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.140 [2024-12-06 18:15:32.622556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:07.140 [2024-12-06 18:15:32.622629] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.140 [2024-12-06 18:15:32.622657] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:07.140 [2024-12-06 18:15:32.622675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.140 [2024-12-06 18:15:32.625474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.140 [2024-12-06 18:15:32.625524] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:07.140 BaseBdev4 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.140 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.398 spare_malloc 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.399 spare_delay 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.399 [2024-12-06 18:15:32.685527] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:07.399 [2024-12-06 18:15:32.685590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.399 [2024-12-06 18:15:32.685616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:07.399 [2024-12-06 18:15:32.685633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.399 [2024-12-06 18:15:32.688507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.399 [2024-12-06 18:15:32.688556] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:07.399 spare 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.399 [2024-12-06 18:15:32.693583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.399 [2024-12-06 18:15:32.696065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.399 [2024-12-06 18:15:32.696157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.399 [2024-12-06 18:15:32.696238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:07.399 [2024-12-06 18:15:32.696496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:07.399 [2024-12-06 18:15:32.696531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:07.399 [2024-12-06 18:15:32.696892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:07.399 [2024-12-06 18:15:32.697139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:07.399 [2024-12-06 18:15:32.697165] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:07.399 [2024-12-06 18:15:32.697346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:07.399 18:15:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.399 "name": "raid_bdev1", 00:17:07.399 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:07.399 "strip_size_kb": 0, 00:17:07.399 "state": "online", 00:17:07.399 "raid_level": "raid1", 00:17:07.399 "superblock": true, 00:17:07.399 "num_base_bdevs": 4, 00:17:07.399 "num_base_bdevs_discovered": 4, 00:17:07.399 "num_base_bdevs_operational": 4, 00:17:07.399 "base_bdevs_list": [ 00:17:07.399 { 
00:17:07.399 "name": "BaseBdev1", 00:17:07.399 "uuid": "1bde6ba4-531b-5283-8b26-d8008055aacb", 00:17:07.399 "is_configured": true, 00:17:07.399 "data_offset": 2048, 00:17:07.399 "data_size": 63488 00:17:07.399 }, 00:17:07.399 { 00:17:07.399 "name": "BaseBdev2", 00:17:07.399 "uuid": "95f6178f-c7a7-5c2e-9770-7cedd11dd19a", 00:17:07.399 "is_configured": true, 00:17:07.399 "data_offset": 2048, 00:17:07.399 "data_size": 63488 00:17:07.399 }, 00:17:07.399 { 00:17:07.399 "name": "BaseBdev3", 00:17:07.399 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:07.399 "is_configured": true, 00:17:07.399 "data_offset": 2048, 00:17:07.399 "data_size": 63488 00:17:07.399 }, 00:17:07.399 { 00:17:07.399 "name": "BaseBdev4", 00:17:07.399 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:07.399 "is_configured": true, 00:17:07.399 "data_offset": 2048, 00:17:07.399 "data_size": 63488 00:17:07.399 } 00:17:07.399 ] 00:17:07.399 }' 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.399 18:15:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.966 [2024-12-06 18:15:33.194173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r 
'.[].base_bdevs_list[0].data_offset' 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:07.966 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 
/dev/nbd0 00:17:08.225 [2024-12-06 18:15:33.589954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:08.225 /dev/nbd0 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:08.225 1+0 records in 00:17:08.225 1+0 records out 00:17:08.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383006 s, 10.7 MB/s 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:08.225 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:08.226 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:08.226 18:15:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:17:18.199 63488+0 records in 00:17:18.199 63488+0 records out 00:17:18.199 32505856 bytes (33 MB, 31 MiB) copied, 8.43927 s, 3.9 MB/s 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:18.199 [2024-12-06 18:15:42.346419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:18.199 18:15:42 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.199 [2024-12-06 18:15:42.378519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.199 "name": "raid_bdev1", 00:17:18.199 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:18.199 "strip_size_kb": 0, 00:17:18.199 "state": "online", 00:17:18.199 "raid_level": "raid1", 00:17:18.199 "superblock": true, 00:17:18.199 "num_base_bdevs": 4, 00:17:18.199 "num_base_bdevs_discovered": 3, 00:17:18.199 "num_base_bdevs_operational": 3, 00:17:18.199 "base_bdevs_list": [ 00:17:18.199 { 00:17:18.199 "name": null, 00:17:18.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.199 "is_configured": false, 00:17:18.199 "data_offset": 0, 00:17:18.199 "data_size": 63488 00:17:18.199 }, 00:17:18.199 { 00:17:18.199 "name": "BaseBdev2", 00:17:18.199 "uuid": "95f6178f-c7a7-5c2e-9770-7cedd11dd19a", 00:17:18.199 "is_configured": true, 00:17:18.199 "data_offset": 2048, 00:17:18.199 "data_size": 63488 00:17:18.199 }, 00:17:18.199 { 00:17:18.199 "name": "BaseBdev3", 00:17:18.199 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:18.199 "is_configured": true, 00:17:18.199 "data_offset": 2048, 00:17:18.199 "data_size": 63488 00:17:18.199 }, 00:17:18.199 { 00:17:18.199 "name": "BaseBdev4", 00:17:18.199 
"uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:18.199 "is_configured": true, 00:17:18.199 "data_offset": 2048, 00:17:18.199 "data_size": 63488 00:17:18.199 } 00:17:18.199 ] 00:17:18.199 }' 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.199 [2024-12-06 18:15:42.922688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.199 [2024-12-06 18:15:42.938147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.199 18:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:18.200 [2024-12-06 18:15:42.940707] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:18.458 18:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.458 18:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.458 18:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.458 18:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.458 18:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.458 18:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.458 
18:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.458 18:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.458 18:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.458 18:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.717 18:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.717 "name": "raid_bdev1", 00:17:18.717 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:18.717 "strip_size_kb": 0, 00:17:18.717 "state": "online", 00:17:18.717 "raid_level": "raid1", 00:17:18.717 "superblock": true, 00:17:18.717 "num_base_bdevs": 4, 00:17:18.717 "num_base_bdevs_discovered": 4, 00:17:18.717 "num_base_bdevs_operational": 4, 00:17:18.717 "process": { 00:17:18.717 "type": "rebuild", 00:17:18.717 "target": "spare", 00:17:18.717 "progress": { 00:17:18.717 "blocks": 20480, 00:17:18.717 "percent": 32 00:17:18.717 } 00:17:18.717 }, 00:17:18.717 "base_bdevs_list": [ 00:17:18.717 { 00:17:18.717 "name": "spare", 00:17:18.717 "uuid": "1f83e985-c3a3-52c5-ac32-965b3100d15c", 00:17:18.717 "is_configured": true, 00:17:18.717 "data_offset": 2048, 00:17:18.717 "data_size": 63488 00:17:18.717 }, 00:17:18.717 { 00:17:18.717 "name": "BaseBdev2", 00:17:18.717 "uuid": "95f6178f-c7a7-5c2e-9770-7cedd11dd19a", 00:17:18.717 "is_configured": true, 00:17:18.717 "data_offset": 2048, 00:17:18.717 "data_size": 63488 00:17:18.717 }, 00:17:18.717 { 00:17:18.717 "name": "BaseBdev3", 00:17:18.717 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:18.717 "is_configured": true, 00:17:18.717 "data_offset": 2048, 00:17:18.717 "data_size": 63488 00:17:18.717 }, 00:17:18.717 { 00:17:18.717 "name": "BaseBdev4", 00:17:18.717 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:18.717 "is_configured": true, 00:17:18.717 "data_offset": 2048, 00:17:18.717 
"data_size": 63488 00:17:18.717 } 00:17:18.717 ] 00:17:18.717 }' 00:17:18.717 18:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.717 [2024-12-06 18:15:44.102553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.717 [2024-12-06 18:15:44.149811] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:18.717 [2024-12-06 18:15:44.149934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.717 [2024-12-06 18:15:44.149960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.717 [2024-12-06 18:15:44.149974] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.717 "name": "raid_bdev1", 00:17:18.717 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:18.717 "strip_size_kb": 0, 00:17:18.717 "state": "online", 00:17:18.717 "raid_level": "raid1", 00:17:18.717 "superblock": true, 00:17:18.717 "num_base_bdevs": 4, 00:17:18.717 "num_base_bdevs_discovered": 3, 00:17:18.717 "num_base_bdevs_operational": 3, 00:17:18.717 "base_bdevs_list": [ 00:17:18.717 { 00:17:18.717 "name": null, 00:17:18.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.717 "is_configured": false, 00:17:18.717 "data_offset": 0, 00:17:18.717 "data_size": 63488 00:17:18.717 }, 00:17:18.717 { 00:17:18.717 "name": "BaseBdev2", 
00:17:18.717 "uuid": "95f6178f-c7a7-5c2e-9770-7cedd11dd19a", 00:17:18.717 "is_configured": true, 00:17:18.717 "data_offset": 2048, 00:17:18.717 "data_size": 63488 00:17:18.717 }, 00:17:18.717 { 00:17:18.717 "name": "BaseBdev3", 00:17:18.717 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:18.717 "is_configured": true, 00:17:18.717 "data_offset": 2048, 00:17:18.717 "data_size": 63488 00:17:18.717 }, 00:17:18.717 { 00:17:18.717 "name": "BaseBdev4", 00:17:18.717 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:18.717 "is_configured": true, 00:17:18.717 "data_offset": 2048, 00:17:18.717 "data_size": 63488 00:17:18.717 } 00:17:18.717 ] 00:17:18.717 }' 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.717 18:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.309 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.309 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.309 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.309 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.309 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.309 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.309 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.309 18:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.309 18:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.309 18:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.309 18:15:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.309 "name": "raid_bdev1", 00:17:19.309 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:19.309 "strip_size_kb": 0, 00:17:19.309 "state": "online", 00:17:19.309 "raid_level": "raid1", 00:17:19.309 "superblock": true, 00:17:19.309 "num_base_bdevs": 4, 00:17:19.309 "num_base_bdevs_discovered": 3, 00:17:19.309 "num_base_bdevs_operational": 3, 00:17:19.309 "base_bdevs_list": [ 00:17:19.309 { 00:17:19.309 "name": null, 00:17:19.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.309 "is_configured": false, 00:17:19.309 "data_offset": 0, 00:17:19.309 "data_size": 63488 00:17:19.309 }, 00:17:19.309 { 00:17:19.309 "name": "BaseBdev2", 00:17:19.309 "uuid": "95f6178f-c7a7-5c2e-9770-7cedd11dd19a", 00:17:19.309 "is_configured": true, 00:17:19.309 "data_offset": 2048, 00:17:19.309 "data_size": 63488 00:17:19.309 }, 00:17:19.309 { 00:17:19.309 "name": "BaseBdev3", 00:17:19.309 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:19.309 "is_configured": true, 00:17:19.309 "data_offset": 2048, 00:17:19.309 "data_size": 63488 00:17:19.309 }, 00:17:19.309 { 00:17:19.309 "name": "BaseBdev4", 00:17:19.309 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:19.309 "is_configured": true, 00:17:19.309 "data_offset": 2048, 00:17:19.309 "data_size": 63488 00:17:19.309 } 00:17:19.309 ] 00:17:19.309 }' 00:17:19.309 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.309 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.309 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.567 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.567 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:19.567 18:15:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.567 18:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.567 [2024-12-06 18:15:44.835029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.567 [2024-12-06 18:15:44.849364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:17:19.567 18:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.567 18:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:19.567 [2024-12-06 18:15:44.851948] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:20.499 18:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.499 18:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.499 18:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.499 18:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.499 18:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.499 18:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.499 18:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.499 18:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.499 18:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.499 18:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.499 18:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.499 "name": 
"raid_bdev1", 00:17:20.499 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:20.499 "strip_size_kb": 0, 00:17:20.499 "state": "online", 00:17:20.499 "raid_level": "raid1", 00:17:20.499 "superblock": true, 00:17:20.499 "num_base_bdevs": 4, 00:17:20.499 "num_base_bdevs_discovered": 4, 00:17:20.499 "num_base_bdevs_operational": 4, 00:17:20.499 "process": { 00:17:20.499 "type": "rebuild", 00:17:20.499 "target": "spare", 00:17:20.499 "progress": { 00:17:20.499 "blocks": 20480, 00:17:20.499 "percent": 32 00:17:20.499 } 00:17:20.499 }, 00:17:20.499 "base_bdevs_list": [ 00:17:20.499 { 00:17:20.499 "name": "spare", 00:17:20.499 "uuid": "1f83e985-c3a3-52c5-ac32-965b3100d15c", 00:17:20.499 "is_configured": true, 00:17:20.499 "data_offset": 2048, 00:17:20.499 "data_size": 63488 00:17:20.499 }, 00:17:20.499 { 00:17:20.499 "name": "BaseBdev2", 00:17:20.499 "uuid": "95f6178f-c7a7-5c2e-9770-7cedd11dd19a", 00:17:20.499 "is_configured": true, 00:17:20.499 "data_offset": 2048, 00:17:20.499 "data_size": 63488 00:17:20.499 }, 00:17:20.499 { 00:17:20.499 "name": "BaseBdev3", 00:17:20.499 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:20.499 "is_configured": true, 00:17:20.499 "data_offset": 2048, 00:17:20.499 "data_size": 63488 00:17:20.499 }, 00:17:20.499 { 00:17:20.499 "name": "BaseBdev4", 00:17:20.499 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:20.499 "is_configured": true, 00:17:20.499 "data_offset": 2048, 00:17:20.499 "data_size": 63488 00:17:20.499 } 00:17:20.499 ] 00:17:20.499 }' 00:17:20.499 18:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.499 18:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.499 18:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.499 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.757 18:15:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:20.757 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.757 [2024-12-06 18:15:46.020981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:20.757 [2024-12-06 18:15:46.161153] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.757 18:15:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.757 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.757 "name": "raid_bdev1", 00:17:20.757 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:20.757 "strip_size_kb": 0, 00:17:20.757 "state": "online", 00:17:20.757 "raid_level": "raid1", 00:17:20.757 "superblock": true, 00:17:20.757 "num_base_bdevs": 4, 00:17:20.757 "num_base_bdevs_discovered": 3, 00:17:20.757 "num_base_bdevs_operational": 3, 00:17:20.757 "process": { 00:17:20.757 "type": "rebuild", 00:17:20.757 "target": "spare", 00:17:20.757 "progress": { 00:17:20.757 "blocks": 24576, 00:17:20.757 "percent": 38 00:17:20.757 } 00:17:20.757 }, 00:17:20.757 "base_bdevs_list": [ 00:17:20.757 { 00:17:20.757 "name": "spare", 00:17:20.757 "uuid": "1f83e985-c3a3-52c5-ac32-965b3100d15c", 00:17:20.757 "is_configured": true, 00:17:20.757 "data_offset": 2048, 00:17:20.757 "data_size": 63488 00:17:20.757 }, 00:17:20.757 { 00:17:20.757 "name": null, 00:17:20.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.758 "is_configured": false, 00:17:20.758 "data_offset": 0, 00:17:20.758 "data_size": 63488 00:17:20.758 }, 00:17:20.758 { 00:17:20.758 "name": "BaseBdev3", 00:17:20.758 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:20.758 "is_configured": true, 00:17:20.758 "data_offset": 2048, 00:17:20.758 "data_size": 63488 00:17:20.758 }, 
00:17:20.758 { 00:17:20.758 "name": "BaseBdev4", 00:17:20.758 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:20.758 "is_configured": true, 00:17:20.758 "data_offset": 2048, 00:17:20.758 "data_size": 63488 00:17:20.758 } 00:17:20.758 ] 00:17:20.758 }' 00:17:20.758 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.758 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.758 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=502 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.016 "name": "raid_bdev1", 00:17:21.016 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:21.016 "strip_size_kb": 0, 00:17:21.016 "state": "online", 00:17:21.016 "raid_level": "raid1", 00:17:21.016 "superblock": true, 00:17:21.016 "num_base_bdevs": 4, 00:17:21.016 "num_base_bdevs_discovered": 3, 00:17:21.016 "num_base_bdevs_operational": 3, 00:17:21.016 "process": { 00:17:21.016 "type": "rebuild", 00:17:21.016 "target": "spare", 00:17:21.016 "progress": { 00:17:21.016 "blocks": 26624, 00:17:21.016 "percent": 41 00:17:21.016 } 00:17:21.016 }, 00:17:21.016 "base_bdevs_list": [ 00:17:21.016 { 00:17:21.016 "name": "spare", 00:17:21.016 "uuid": "1f83e985-c3a3-52c5-ac32-965b3100d15c", 00:17:21.016 "is_configured": true, 00:17:21.016 "data_offset": 2048, 00:17:21.016 "data_size": 63488 00:17:21.016 }, 00:17:21.016 { 00:17:21.016 "name": null, 00:17:21.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.016 "is_configured": false, 00:17:21.016 "data_offset": 0, 00:17:21.016 "data_size": 63488 00:17:21.016 }, 00:17:21.016 { 00:17:21.016 "name": "BaseBdev3", 00:17:21.016 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:21.016 "is_configured": true, 00:17:21.016 "data_offset": 2048, 00:17:21.016 "data_size": 63488 00:17:21.016 }, 00:17:21.016 { 00:17:21.016 "name": "BaseBdev4", 00:17:21.016 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:21.016 "is_configured": true, 00:17:21.016 "data_offset": 2048, 00:17:21.016 "data_size": 63488 00:17:21.016 } 00:17:21.016 ] 00:17:21.016 }' 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.016 18:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.395 "name": "raid_bdev1", 00:17:22.395 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:22.395 "strip_size_kb": 0, 00:17:22.395 "state": "online", 00:17:22.395 "raid_level": "raid1", 00:17:22.395 "superblock": true, 00:17:22.395 "num_base_bdevs": 4, 00:17:22.395 "num_base_bdevs_discovered": 3, 00:17:22.395 "num_base_bdevs_operational": 3, 00:17:22.395 "process": { 00:17:22.395 "type": "rebuild", 00:17:22.395 "target": "spare", 
00:17:22.395 "progress": { 00:17:22.395 "blocks": 51200, 00:17:22.395 "percent": 80 00:17:22.395 } 00:17:22.395 }, 00:17:22.395 "base_bdevs_list": [ 00:17:22.395 { 00:17:22.395 "name": "spare", 00:17:22.395 "uuid": "1f83e985-c3a3-52c5-ac32-965b3100d15c", 00:17:22.395 "is_configured": true, 00:17:22.395 "data_offset": 2048, 00:17:22.395 "data_size": 63488 00:17:22.395 }, 00:17:22.395 { 00:17:22.395 "name": null, 00:17:22.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.395 "is_configured": false, 00:17:22.395 "data_offset": 0, 00:17:22.395 "data_size": 63488 00:17:22.395 }, 00:17:22.395 { 00:17:22.395 "name": "BaseBdev3", 00:17:22.395 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:22.395 "is_configured": true, 00:17:22.395 "data_offset": 2048, 00:17:22.395 "data_size": 63488 00:17:22.395 }, 00:17:22.395 { 00:17:22.395 "name": "BaseBdev4", 00:17:22.395 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:22.395 "is_configured": true, 00:17:22.395 "data_offset": 2048, 00:17:22.395 "data_size": 63488 00:17:22.395 } 00:17:22.395 ] 00:17:22.395 }' 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.395 18:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.654 [2024-12-06 18:15:48.075911] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:22.654 [2024-12-06 18:15:48.076012] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:22.654 [2024-12-06 18:15:48.076174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:23.221 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.221 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.221 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.221 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.221 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.221 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.221 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.221 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.221 18:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.221 18:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.221 18:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.221 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.221 "name": "raid_bdev1", 00:17:23.221 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:23.221 "strip_size_kb": 0, 00:17:23.221 "state": "online", 00:17:23.221 "raid_level": "raid1", 00:17:23.221 "superblock": true, 00:17:23.221 "num_base_bdevs": 4, 00:17:23.221 "num_base_bdevs_discovered": 3, 00:17:23.221 "num_base_bdevs_operational": 3, 00:17:23.221 "base_bdevs_list": [ 00:17:23.221 { 00:17:23.221 "name": "spare", 00:17:23.221 "uuid": "1f83e985-c3a3-52c5-ac32-965b3100d15c", 00:17:23.221 "is_configured": true, 00:17:23.221 "data_offset": 2048, 00:17:23.221 "data_size": 63488 00:17:23.221 }, 00:17:23.221 { 00:17:23.221 "name": null, 
00:17:23.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.221 "is_configured": false, 00:17:23.221 "data_offset": 0, 00:17:23.221 "data_size": 63488 00:17:23.221 }, 00:17:23.221 { 00:17:23.221 "name": "BaseBdev3", 00:17:23.221 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:23.221 "is_configured": true, 00:17:23.221 "data_offset": 2048, 00:17:23.221 "data_size": 63488 00:17:23.221 }, 00:17:23.221 { 00:17:23.221 "name": "BaseBdev4", 00:17:23.221 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:23.221 "is_configured": true, 00:17:23.221 "data_offset": 2048, 00:17:23.221 "data_size": 63488 00:17:23.221 } 00:17:23.221 ] 00:17:23.221 }' 00:17:23.221 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.480 "name": "raid_bdev1", 00:17:23.480 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:23.480 "strip_size_kb": 0, 00:17:23.480 "state": "online", 00:17:23.480 "raid_level": "raid1", 00:17:23.480 "superblock": true, 00:17:23.480 "num_base_bdevs": 4, 00:17:23.480 "num_base_bdevs_discovered": 3, 00:17:23.480 "num_base_bdevs_operational": 3, 00:17:23.480 "base_bdevs_list": [ 00:17:23.480 { 00:17:23.480 "name": "spare", 00:17:23.480 "uuid": "1f83e985-c3a3-52c5-ac32-965b3100d15c", 00:17:23.480 "is_configured": true, 00:17:23.480 "data_offset": 2048, 00:17:23.480 "data_size": 63488 00:17:23.480 }, 00:17:23.480 { 00:17:23.480 "name": null, 00:17:23.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.480 "is_configured": false, 00:17:23.480 "data_offset": 0, 00:17:23.480 "data_size": 63488 00:17:23.480 }, 00:17:23.480 { 00:17:23.480 "name": "BaseBdev3", 00:17:23.480 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:23.480 "is_configured": true, 00:17:23.480 "data_offset": 2048, 00:17:23.480 "data_size": 63488 00:17:23.480 }, 00:17:23.480 { 00:17:23.480 "name": "BaseBdev4", 00:17:23.480 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:23.480 "is_configured": true, 00:17:23.480 "data_offset": 2048, 00:17:23.480 "data_size": 63488 00:17:23.480 } 00:17:23.480 ] 00:17:23.480 }' 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:23.480 18:15:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.480 18:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.481 18:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.740 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.740 "name": "raid_bdev1", 
00:17:23.740 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:23.740 "strip_size_kb": 0, 00:17:23.740 "state": "online", 00:17:23.740 "raid_level": "raid1", 00:17:23.740 "superblock": true, 00:17:23.740 "num_base_bdevs": 4, 00:17:23.740 "num_base_bdevs_discovered": 3, 00:17:23.740 "num_base_bdevs_operational": 3, 00:17:23.740 "base_bdevs_list": [ 00:17:23.740 { 00:17:23.740 "name": "spare", 00:17:23.740 "uuid": "1f83e985-c3a3-52c5-ac32-965b3100d15c", 00:17:23.740 "is_configured": true, 00:17:23.740 "data_offset": 2048, 00:17:23.740 "data_size": 63488 00:17:23.740 }, 00:17:23.740 { 00:17:23.740 "name": null, 00:17:23.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.740 "is_configured": false, 00:17:23.740 "data_offset": 0, 00:17:23.740 "data_size": 63488 00:17:23.740 }, 00:17:23.740 { 00:17:23.740 "name": "BaseBdev3", 00:17:23.740 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:23.740 "is_configured": true, 00:17:23.740 "data_offset": 2048, 00:17:23.740 "data_size": 63488 00:17:23.740 }, 00:17:23.740 { 00:17:23.740 "name": "BaseBdev4", 00:17:23.740 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:23.740 "is_configured": true, 00:17:23.740 "data_offset": 2048, 00:17:23.740 "data_size": 63488 00:17:23.740 } 00:17:23.740 ] 00:17:23.740 }' 00:17:23.740 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.740 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.999 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:23.999 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.999 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.999 [2024-12-06 18:15:49.501190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.999 [2024-12-06 18:15:49.501258] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:17:23.999 [2024-12-06 18:15:49.501386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.999 [2024-12-06 18:15:49.501503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.999 [2024-12-06 18:15:49.501519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:23.999 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.000 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:24.000 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.000 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.000 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.288 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.288 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:24.288 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:24.288 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:24.288 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:24.288 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.288 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:24.288 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:24.288 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:17:24.288 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:24.288 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:24.288 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:24.288 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:24.288 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:24.546 /dev/nbd0 00:17:24.546 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:24.546 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:24.546 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:24.546 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:24.546 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.546 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.547 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:24.547 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:24.547 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.547 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.547 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.547 1+0 records in 00:17:24.547 1+0 records out 00:17:24.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274315 s, 14.9 MB/s 00:17:24.547 18:15:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.547 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:24.547 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.547 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.547 18:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:24.547 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:24.547 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:24.547 18:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:24.805 /dev/nbd1 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.805 1+0 records in 00:17:24.805 1+0 records out 00:17:24.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381563 s, 10.7 MB/s 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:24.805 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:25.063 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:25.063 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:25.063 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:25.063 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:25.063 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:25.063 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.063 18:15:50 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:25.322 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:25.322 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:25.322 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:25.322 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.322 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.322 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:25.322 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:25.322 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.322 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.322 18:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.580 [2024-12-06 18:15:51.024998] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:25.580 [2024-12-06 18:15:51.025058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.580 [2024-12-06 18:15:51.025090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:25.580 [2024-12-06 18:15:51.025105] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.580 [2024-12-06 18:15:51.028092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.580 [2024-12-06 18:15:51.028138] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:25.580 [2024-12-06 18:15:51.028263] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:25.580 [2024-12-06 18:15:51.028340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.580 [2024-12-06 18:15:51.028528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:17:25.580 [2024-12-06 18:15:51.028670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:25.580 spare 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.580 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.839 [2024-12-06 18:15:51.128816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:25.839 [2024-12-06 18:15:51.128844] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:25.839 [2024-12-06 18:15:51.129197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:17:25.839 [2024-12-06 18:15:51.129434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:25.839 [2024-12-06 18:15:51.129466] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:25.839 [2024-12-06 18:15:51.129684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.839 "name": "raid_bdev1", 00:17:25.839 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:25.839 "strip_size_kb": 0, 00:17:25.839 "state": "online", 00:17:25.839 "raid_level": "raid1", 00:17:25.839 "superblock": true, 00:17:25.839 "num_base_bdevs": 4, 00:17:25.839 "num_base_bdevs_discovered": 3, 00:17:25.839 "num_base_bdevs_operational": 3, 00:17:25.839 "base_bdevs_list": [ 00:17:25.839 { 00:17:25.839 "name": "spare", 00:17:25.839 "uuid": "1f83e985-c3a3-52c5-ac32-965b3100d15c", 00:17:25.839 "is_configured": true, 00:17:25.839 "data_offset": 2048, 00:17:25.839 "data_size": 63488 00:17:25.839 }, 00:17:25.839 { 00:17:25.839 "name": null, 00:17:25.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.839 "is_configured": false, 00:17:25.839 "data_offset": 2048, 
00:17:25.839 "data_size": 63488 00:17:25.839 }, 00:17:25.839 { 00:17:25.839 "name": "BaseBdev3", 00:17:25.839 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:25.839 "is_configured": true, 00:17:25.839 "data_offset": 2048, 00:17:25.839 "data_size": 63488 00:17:25.839 }, 00:17:25.839 { 00:17:25.839 "name": "BaseBdev4", 00:17:25.839 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:25.839 "is_configured": true, 00:17:25.839 "data_offset": 2048, 00:17:25.839 "data_size": 63488 00:17:25.839 } 00:17:25.839 ] 00:17:25.839 }' 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.839 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.407 "name": "raid_bdev1", 00:17:26.407 "uuid": 
"57ef8b12-04a3-415a-b567-e41889b47626", 00:17:26.407 "strip_size_kb": 0, 00:17:26.407 "state": "online", 00:17:26.407 "raid_level": "raid1", 00:17:26.407 "superblock": true, 00:17:26.407 "num_base_bdevs": 4, 00:17:26.407 "num_base_bdevs_discovered": 3, 00:17:26.407 "num_base_bdevs_operational": 3, 00:17:26.407 "base_bdevs_list": [ 00:17:26.407 { 00:17:26.407 "name": "spare", 00:17:26.407 "uuid": "1f83e985-c3a3-52c5-ac32-965b3100d15c", 00:17:26.407 "is_configured": true, 00:17:26.407 "data_offset": 2048, 00:17:26.407 "data_size": 63488 00:17:26.407 }, 00:17:26.407 { 00:17:26.407 "name": null, 00:17:26.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.407 "is_configured": false, 00:17:26.407 "data_offset": 2048, 00:17:26.407 "data_size": 63488 00:17:26.407 }, 00:17:26.407 { 00:17:26.407 "name": "BaseBdev3", 00:17:26.407 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:26.407 "is_configured": true, 00:17:26.407 "data_offset": 2048, 00:17:26.407 "data_size": 63488 00:17:26.407 }, 00:17:26.407 { 00:17:26.407 "name": "BaseBdev4", 00:17:26.407 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:26.407 "is_configured": true, 00:17:26.407 "data_offset": 2048, 00:17:26.407 "data_size": 63488 00:17:26.407 } 00:17:26.407 ] 00:17:26.407 }' 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.407 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.408 [2024-12-06 18:15:51.889947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.408 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.666 18:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.666 "name": "raid_bdev1", 00:17:26.666 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:26.666 "strip_size_kb": 0, 00:17:26.666 "state": "online", 00:17:26.666 "raid_level": "raid1", 00:17:26.666 "superblock": true, 00:17:26.666 "num_base_bdevs": 4, 00:17:26.666 "num_base_bdevs_discovered": 2, 00:17:26.666 "num_base_bdevs_operational": 2, 00:17:26.666 "base_bdevs_list": [ 00:17:26.666 { 00:17:26.666 "name": null, 00:17:26.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.666 "is_configured": false, 00:17:26.666 "data_offset": 0, 00:17:26.666 "data_size": 63488 00:17:26.666 }, 00:17:26.666 { 00:17:26.666 "name": null, 00:17:26.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.666 "is_configured": false, 00:17:26.666 "data_offset": 2048, 00:17:26.666 "data_size": 63488 00:17:26.666 }, 00:17:26.666 { 00:17:26.666 "name": "BaseBdev3", 00:17:26.666 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:26.666 "is_configured": true, 00:17:26.666 "data_offset": 2048, 00:17:26.666 "data_size": 63488 00:17:26.666 }, 00:17:26.666 { 00:17:26.666 "name": "BaseBdev4", 00:17:26.666 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:26.666 "is_configured": true, 00:17:26.666 "data_offset": 2048, 00:17:26.666 "data_size": 63488 00:17:26.666 } 00:17:26.666 ] 00:17:26.666 }' 00:17:26.666 18:15:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.666 18:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.926 18:15:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:26.926 18:15:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.926 18:15:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.926 [2024-12-06 18:15:52.390134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.926 [2024-12-06 18:15:52.390465] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:26.926 [2024-12-06 18:15:52.390522] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:26.926 [2024-12-06 18:15:52.390576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.926 [2024-12-06 18:15:52.405072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:17:26.926 18:15:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.926 18:15:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:26.926 [2024-12-06 18:15:52.407627] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:28.303 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.303 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.304 "name": "raid_bdev1", 00:17:28.304 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:28.304 "strip_size_kb": 0, 00:17:28.304 "state": "online", 00:17:28.304 "raid_level": "raid1", 00:17:28.304 "superblock": true, 00:17:28.304 "num_base_bdevs": 4, 00:17:28.304 "num_base_bdevs_discovered": 3, 00:17:28.304 "num_base_bdevs_operational": 3, 00:17:28.304 "process": { 00:17:28.304 "type": "rebuild", 00:17:28.304 "target": "spare", 00:17:28.304 "progress": { 00:17:28.304 "blocks": 20480, 00:17:28.304 "percent": 32 00:17:28.304 } 00:17:28.304 }, 00:17:28.304 "base_bdevs_list": [ 00:17:28.304 { 00:17:28.304 "name": "spare", 00:17:28.304 "uuid": "1f83e985-c3a3-52c5-ac32-965b3100d15c", 00:17:28.304 "is_configured": true, 00:17:28.304 "data_offset": 2048, 00:17:28.304 "data_size": 63488 00:17:28.304 }, 00:17:28.304 { 00:17:28.304 "name": null, 00:17:28.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.304 "is_configured": false, 00:17:28.304 "data_offset": 2048, 00:17:28.304 "data_size": 63488 00:17:28.304 }, 00:17:28.304 { 00:17:28.304 "name": "BaseBdev3", 00:17:28.304 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:28.304 "is_configured": true, 00:17:28.304 "data_offset": 2048, 00:17:28.304 "data_size": 
63488 00:17:28.304 }, 00:17:28.304 { 00:17:28.304 "name": "BaseBdev4", 00:17:28.304 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:28.304 "is_configured": true, 00:17:28.304 "data_offset": 2048, 00:17:28.304 "data_size": 63488 00:17:28.304 } 00:17:28.304 ] 00:17:28.304 }' 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.304 [2024-12-06 18:15:53.573031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.304 [2024-12-06 18:15:53.616998] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:28.304 [2024-12-06 18:15:53.617103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.304 [2024-12-06 18:15:53.617131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.304 [2024-12-06 18:15:53.617142] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.304 "name": "raid_bdev1", 00:17:28.304 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:28.304 "strip_size_kb": 0, 00:17:28.304 "state": "online", 00:17:28.304 "raid_level": "raid1", 00:17:28.304 "superblock": true, 00:17:28.304 "num_base_bdevs": 4, 00:17:28.304 "num_base_bdevs_discovered": 2, 00:17:28.304 "num_base_bdevs_operational": 2, 00:17:28.304 "base_bdevs_list": [ 00:17:28.304 { 00:17:28.304 "name": null, 
00:17:28.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.304 "is_configured": false, 00:17:28.304 "data_offset": 0, 00:17:28.304 "data_size": 63488 00:17:28.304 }, 00:17:28.304 { 00:17:28.304 "name": null, 00:17:28.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.304 "is_configured": false, 00:17:28.304 "data_offset": 2048, 00:17:28.304 "data_size": 63488 00:17:28.304 }, 00:17:28.304 { 00:17:28.304 "name": "BaseBdev3", 00:17:28.304 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:28.304 "is_configured": true, 00:17:28.304 "data_offset": 2048, 00:17:28.304 "data_size": 63488 00:17:28.304 }, 00:17:28.304 { 00:17:28.304 "name": "BaseBdev4", 00:17:28.304 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:28.304 "is_configured": true, 00:17:28.304 "data_offset": 2048, 00:17:28.304 "data_size": 63488 00:17:28.304 } 00:17:28.304 ] 00:17:28.304 }' 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.304 18:15:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.952 18:15:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:28.952 18:15:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.952 18:15:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.952 [2024-12-06 18:15:54.178355] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:28.952 [2024-12-06 18:15:54.178427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.952 [2024-12-06 18:15:54.178469] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:28.952 [2024-12-06 18:15:54.178487] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.952 [2024-12-06 18:15:54.179139] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:28.952 [2024-12-06 18:15:54.179181] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:28.952 [2024-12-06 18:15:54.179310] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:28.952 [2024-12-06 18:15:54.179329] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:28.952 [2024-12-06 18:15:54.179348] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:28.952 [2024-12-06 18:15:54.179379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:28.952 [2024-12-06 18:15:54.193324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:17:28.952 spare 00:17:28.952 18:15:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.952 18:15:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:28.952 [2024-12-06 18:15:54.195825] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.890 
18:15:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.890 "name": "raid_bdev1", 00:17:29.890 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:29.890 "strip_size_kb": 0, 00:17:29.890 "state": "online", 00:17:29.890 "raid_level": "raid1", 00:17:29.890 "superblock": true, 00:17:29.890 "num_base_bdevs": 4, 00:17:29.890 "num_base_bdevs_discovered": 3, 00:17:29.890 "num_base_bdevs_operational": 3, 00:17:29.890 "process": { 00:17:29.890 "type": "rebuild", 00:17:29.890 "target": "spare", 00:17:29.890 "progress": { 00:17:29.890 "blocks": 20480, 00:17:29.890 "percent": 32 00:17:29.890 } 00:17:29.890 }, 00:17:29.890 "base_bdevs_list": [ 00:17:29.890 { 00:17:29.890 "name": "spare", 00:17:29.890 "uuid": "1f83e985-c3a3-52c5-ac32-965b3100d15c", 00:17:29.890 "is_configured": true, 00:17:29.890 "data_offset": 2048, 00:17:29.890 "data_size": 63488 00:17:29.890 }, 00:17:29.890 { 00:17:29.890 "name": null, 00:17:29.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.890 "is_configured": false, 00:17:29.890 "data_offset": 2048, 00:17:29.890 "data_size": 63488 00:17:29.890 }, 00:17:29.890 { 00:17:29.890 "name": "BaseBdev3", 00:17:29.890 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:29.890 "is_configured": true, 00:17:29.890 "data_offset": 2048, 00:17:29.890 "data_size": 63488 00:17:29.890 }, 00:17:29.890 { 00:17:29.890 "name": "BaseBdev4", 00:17:29.890 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:29.890 "is_configured": true, 00:17:29.890 "data_offset": 2048, 00:17:29.890 "data_size": 63488 00:17:29.890 } 00:17:29.890 ] 00:17:29.890 }' 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.890 18:15:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.890 [2024-12-06 18:15:55.385337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:29.890 [2024-12-06 18:15:55.405232] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:29.890 [2024-12-06 18:15:55.405316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.890 [2024-12-06 18:15:55.405343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:29.890 [2024-12-06 18:15:55.405357] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.148 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.148 "name": "raid_bdev1", 00:17:30.148 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:30.148 "strip_size_kb": 0, 00:17:30.148 "state": "online", 00:17:30.148 "raid_level": "raid1", 00:17:30.148 "superblock": true, 00:17:30.148 "num_base_bdevs": 4, 00:17:30.148 "num_base_bdevs_discovered": 2, 00:17:30.148 "num_base_bdevs_operational": 2, 00:17:30.148 "base_bdevs_list": [ 00:17:30.148 { 00:17:30.148 "name": null, 00:17:30.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.148 "is_configured": false, 00:17:30.148 "data_offset": 0, 00:17:30.148 "data_size": 63488 00:17:30.148 }, 00:17:30.148 { 00:17:30.148 "name": null, 00:17:30.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.148 "is_configured": false, 00:17:30.148 "data_offset": 2048, 
00:17:30.148 "data_size": 63488 00:17:30.148 }, 00:17:30.148 { 00:17:30.148 "name": "BaseBdev3", 00:17:30.148 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:30.148 "is_configured": true, 00:17:30.149 "data_offset": 2048, 00:17:30.149 "data_size": 63488 00:17:30.149 }, 00:17:30.149 { 00:17:30.149 "name": "BaseBdev4", 00:17:30.149 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:30.149 "is_configured": true, 00:17:30.149 "data_offset": 2048, 00:17:30.149 "data_size": 63488 00:17:30.149 } 00:17:30.149 ] 00:17:30.149 }' 00:17:30.149 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.149 18:15:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.716 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:30.716 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.716 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:30.716 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:30.716 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.716 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.716 18:15:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.716 18:15:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.716 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.716 18:15:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.716 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.716 "name": "raid_bdev1", 00:17:30.716 "uuid": 
"57ef8b12-04a3-415a-b567-e41889b47626", 00:17:30.716 "strip_size_kb": 0, 00:17:30.716 "state": "online", 00:17:30.716 "raid_level": "raid1", 00:17:30.716 "superblock": true, 00:17:30.716 "num_base_bdevs": 4, 00:17:30.716 "num_base_bdevs_discovered": 2, 00:17:30.716 "num_base_bdevs_operational": 2, 00:17:30.716 "base_bdevs_list": [ 00:17:30.716 { 00:17:30.716 "name": null, 00:17:30.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.716 "is_configured": false, 00:17:30.716 "data_offset": 0, 00:17:30.716 "data_size": 63488 00:17:30.716 }, 00:17:30.716 { 00:17:30.716 "name": null, 00:17:30.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.716 "is_configured": false, 00:17:30.716 "data_offset": 2048, 00:17:30.716 "data_size": 63488 00:17:30.716 }, 00:17:30.716 { 00:17:30.716 "name": "BaseBdev3", 00:17:30.716 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:30.716 "is_configured": true, 00:17:30.716 "data_offset": 2048, 00:17:30.716 "data_size": 63488 00:17:30.716 }, 00:17:30.716 { 00:17:30.716 "name": "BaseBdev4", 00:17:30.716 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:30.716 "is_configured": true, 00:17:30.716 "data_offset": 2048, 00:17:30.716 "data_size": 63488 00:17:30.716 } 00:17:30.716 ] 00:17:30.716 }' 00:17:30.716 18:15:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.716 18:15:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:30.716 18:15:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.716 18:15:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:30.716 18:15:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:30.716 18:15:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.716 18:15:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:30.716 18:15:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.716 18:15:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:30.716 18:15:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.716 18:15:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.716 [2024-12-06 18:15:56.097729] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:30.716 [2024-12-06 18:15:56.097806] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.716 [2024-12-06 18:15:56.097841] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:30.716 [2024-12-06 18:15:56.097857] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.716 [2024-12-06 18:15:56.098445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.716 [2024-12-06 18:15:56.098483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:30.716 [2024-12-06 18:15:56.098606] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:30.716 [2024-12-06 18:15:56.098632] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:30.716 [2024-12-06 18:15:56.098644] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:30.716 [2024-12-06 18:15:56.098673] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:30.716 BaseBdev1 00:17:30.716 18:15:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.716 18:15:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.653 "name": "raid_bdev1", 00:17:31.653 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:31.653 "strip_size_kb": 0, 00:17:31.653 "state": "online", 00:17:31.653 
"raid_level": "raid1", 00:17:31.653 "superblock": true, 00:17:31.653 "num_base_bdevs": 4, 00:17:31.653 "num_base_bdevs_discovered": 2, 00:17:31.653 "num_base_bdevs_operational": 2, 00:17:31.653 "base_bdevs_list": [ 00:17:31.653 { 00:17:31.653 "name": null, 00:17:31.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.653 "is_configured": false, 00:17:31.653 "data_offset": 0, 00:17:31.653 "data_size": 63488 00:17:31.653 }, 00:17:31.653 { 00:17:31.653 "name": null, 00:17:31.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.653 "is_configured": false, 00:17:31.653 "data_offset": 2048, 00:17:31.653 "data_size": 63488 00:17:31.653 }, 00:17:31.653 { 00:17:31.653 "name": "BaseBdev3", 00:17:31.653 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:31.653 "is_configured": true, 00:17:31.653 "data_offset": 2048, 00:17:31.653 "data_size": 63488 00:17:31.653 }, 00:17:31.653 { 00:17:31.653 "name": "BaseBdev4", 00:17:31.653 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:31.653 "is_configured": true, 00:17:31.653 "data_offset": 2048, 00:17:31.653 "data_size": 63488 00:17:31.653 } 00:17:31.653 ] 00:17:31.653 }' 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.653 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.220 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.220 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.220 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.220 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.220 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.220 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:32.220 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.220 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.220 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.220 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.220 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.220 "name": "raid_bdev1", 00:17:32.220 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:32.220 "strip_size_kb": 0, 00:17:32.220 "state": "online", 00:17:32.220 "raid_level": "raid1", 00:17:32.220 "superblock": true, 00:17:32.220 "num_base_bdevs": 4, 00:17:32.220 "num_base_bdevs_discovered": 2, 00:17:32.220 "num_base_bdevs_operational": 2, 00:17:32.220 "base_bdevs_list": [ 00:17:32.220 { 00:17:32.220 "name": null, 00:17:32.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.220 "is_configured": false, 00:17:32.220 "data_offset": 0, 00:17:32.220 "data_size": 63488 00:17:32.220 }, 00:17:32.220 { 00:17:32.220 "name": null, 00:17:32.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.220 "is_configured": false, 00:17:32.220 "data_offset": 2048, 00:17:32.220 "data_size": 63488 00:17:32.220 }, 00:17:32.220 { 00:17:32.220 "name": "BaseBdev3", 00:17:32.220 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:32.220 "is_configured": true, 00:17:32.220 "data_offset": 2048, 00:17:32.220 "data_size": 63488 00:17:32.220 }, 00:17:32.220 { 00:17:32.220 "name": "BaseBdev4", 00:17:32.220 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:32.220 "is_configured": true, 00:17:32.220 "data_offset": 2048, 00:17:32.220 "data_size": 63488 00:17:32.220 } 00:17:32.220 ] 00:17:32.220 }' 00:17:32.220 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.479 [2024-12-06 18:15:57.826552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:32.479 [2024-12-06 18:15:57.826842] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:32.479 [2024-12-06 18:15:57.826873] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:32.479 
request: 00:17:32.479 { 00:17:32.479 "base_bdev": "BaseBdev1", 00:17:32.479 "raid_bdev": "raid_bdev1", 00:17:32.479 "method": "bdev_raid_add_base_bdev", 00:17:32.479 "req_id": 1 00:17:32.479 } 00:17:32.479 Got JSON-RPC error response 00:17:32.479 response: 00:17:32.479 { 00:17:32.479 "code": -22, 00:17:32.479 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:32.479 } 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.479 18:15:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.414 "name": "raid_bdev1", 00:17:33.414 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:33.414 "strip_size_kb": 0, 00:17:33.414 "state": "online", 00:17:33.414 "raid_level": "raid1", 00:17:33.414 "superblock": true, 00:17:33.414 "num_base_bdevs": 4, 00:17:33.414 "num_base_bdevs_discovered": 2, 00:17:33.414 "num_base_bdevs_operational": 2, 00:17:33.414 "base_bdevs_list": [ 00:17:33.414 { 00:17:33.414 "name": null, 00:17:33.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.414 "is_configured": false, 00:17:33.414 "data_offset": 0, 00:17:33.414 "data_size": 63488 00:17:33.414 }, 00:17:33.414 { 00:17:33.414 "name": null, 00:17:33.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.414 "is_configured": false, 00:17:33.414 "data_offset": 2048, 00:17:33.414 "data_size": 63488 00:17:33.414 }, 00:17:33.414 { 00:17:33.414 "name": "BaseBdev3", 00:17:33.414 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:33.414 "is_configured": true, 00:17:33.414 "data_offset": 2048, 00:17:33.414 "data_size": 63488 00:17:33.414 }, 00:17:33.414 { 00:17:33.414 "name": "BaseBdev4", 00:17:33.414 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:33.414 "is_configured": true, 00:17:33.414 
"data_offset": 2048, 00:17:33.414 "data_size": 63488 00:17:33.414 } 00:17:33.414 ] 00:17:33.414 }' 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.414 18:15:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.980 18:15:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.980 18:15:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.980 18:15:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.980 18:15:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.980 18:15:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.980 18:15:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.980 18:15:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.980 18:15:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.980 18:15:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.980 18:15:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.980 18:15:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.980 "name": "raid_bdev1", 00:17:33.980 "uuid": "57ef8b12-04a3-415a-b567-e41889b47626", 00:17:33.980 "strip_size_kb": 0, 00:17:33.980 "state": "online", 00:17:33.980 "raid_level": "raid1", 00:17:33.980 "superblock": true, 00:17:33.980 "num_base_bdevs": 4, 00:17:33.980 "num_base_bdevs_discovered": 2, 00:17:33.980 "num_base_bdevs_operational": 2, 00:17:33.980 "base_bdevs_list": [ 00:17:33.980 { 00:17:33.980 "name": null, 00:17:33.980 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:33.980 "is_configured": false, 00:17:33.980 "data_offset": 0, 00:17:33.980 "data_size": 63488 00:17:33.980 }, 00:17:33.980 { 00:17:33.980 "name": null, 00:17:33.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.980 "is_configured": false, 00:17:33.980 "data_offset": 2048, 00:17:33.980 "data_size": 63488 00:17:33.980 }, 00:17:33.980 { 00:17:33.980 "name": "BaseBdev3", 00:17:33.980 "uuid": "33d61f68-4566-5c85-b17b-f9e89cf2c1f2", 00:17:33.980 "is_configured": true, 00:17:33.980 "data_offset": 2048, 00:17:33.980 "data_size": 63488 00:17:33.980 }, 00:17:33.980 { 00:17:33.980 "name": "BaseBdev4", 00:17:33.980 "uuid": "16d1866f-1a76-5790-b957-b5deb5c249cc", 00:17:33.980 "is_configured": true, 00:17:33.980 "data_offset": 2048, 00:17:33.980 "data_size": 63488 00:17:33.980 } 00:17:33.980 ] 00:17:33.980 }' 00:17:33.980 18:15:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.239 18:15:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:34.239 18:15:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.239 18:15:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:34.239 18:15:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78312 00:17:34.239 18:15:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78312 ']' 00:17:34.239 18:15:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78312 00:17:34.239 18:15:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:34.239 18:15:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.239 18:15:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78312 00:17:34.239 killing process with pid 78312 00:17:34.239 Received shutdown signal, 
test time was about 60.000000 seconds 00:17:34.239 00:17:34.239 Latency(us) 00:17:34.239 [2024-12-06T18:15:59.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.239 [2024-12-06T18:15:59.759Z] =================================================================================================================== 00:17:34.239 [2024-12-06T18:15:59.759Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:34.239 18:15:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.239 18:15:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.239 18:15:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78312' 00:17:34.239 18:15:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78312 00:17:34.239 [2024-12-06 18:15:59.612237] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.239 18:15:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78312 00:17:34.239 [2024-12-06 18:15:59.612421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.239 [2024-12-06 18:15:59.612557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.239 [2024-12-06 18:15:59.612588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:34.805 [2024-12-06 18:16:00.057945] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:35.762 00:17:35.762 real 0m29.872s 00:17:35.762 user 0m36.175s 00:17:35.762 sys 0m4.105s 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:35.762 ************************************ 00:17:35.762 END TEST raid_rebuild_test_sb 00:17:35.762 ************************************ 00:17:35.762 18:16:01 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:17:35.762 18:16:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:35.762 18:16:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.762 18:16:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:35.762 ************************************ 00:17:35.762 START TEST raid_rebuild_test_io 00:17:35.762 ************************************ 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:35.762 18:16:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:35.762 18:16:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79114 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79114 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79114 ']' 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.762 18:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.020 [2024-12-06 18:16:01.329439] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:17:36.020 [2024-12-06 18:16:01.329887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79114 ] 00:17:36.020 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:36.020 Zero copy mechanism will not be used. 
00:17:36.020 [2024-12-06 18:16:01.531023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.278 [2024-12-06 18:16:01.696518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.535 [2024-12-06 18:16:01.918137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.535 [2024-12-06 18:16:01.918430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.793 BaseBdev1_malloc 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.793 [2024-12-06 18:16:02.276220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:36.793 [2024-12-06 18:16:02.276442] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.793 [2024-12-06 18:16:02.276579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:36.793 [2024-12-06 
18:16:02.276763] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.793 [2024-12-06 18:16:02.279839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.793 BaseBdev1 00:17:36.793 [2024-12-06 18:16:02.280061] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.793 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.050 BaseBdev2_malloc 00:17:37.050 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.050 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:37.050 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.050 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.050 [2024-12-06 18:16:02.329594] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:37.050 [2024-12-06 18:16:02.329674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.050 [2024-12-06 18:16:02.329722] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:37.050 [2024-12-06 18:16:02.329741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.050 [2024-12-06 18:16:02.332707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:17:37.050 [2024-12-06 18:16:02.332895] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:37.051 BaseBdev2 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.051 BaseBdev3_malloc 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.051 [2024-12-06 18:16:02.407880] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:37.051 [2024-12-06 18:16:02.408092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.051 [2024-12-06 18:16:02.408169] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:37.051 [2024-12-06 18:16:02.408425] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.051 [2024-12-06 18:16:02.411292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.051 [2024-12-06 18:16:02.411476] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:37.051 BaseBdev3 00:17:37.051 18:16:02 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.051 BaseBdev4_malloc 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.051 [2024-12-06 18:16:02.466013] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:37.051 [2024-12-06 18:16:02.466091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.051 [2024-12-06 18:16:02.466124] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:37.051 [2024-12-06 18:16:02.466142] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.051 [2024-12-06 18:16:02.468917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.051 [2024-12-06 18:16:02.468973] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:37.051 BaseBdev4 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.051 spare_malloc 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.051 spare_delay 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.051 [2024-12-06 18:16:02.526938] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:37.051 [2024-12-06 18:16:02.527026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.051 [2024-12-06 18:16:02.527055] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:37.051 [2024-12-06 18:16:02.527073] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.051 [2024-12-06 18:16:02.529924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.051 [2024-12-06 18:16:02.529975] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:37.051 spare 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.051 [2024-12-06 18:16:02.534981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.051 [2024-12-06 18:16:02.537716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:37.051 [2024-12-06 18:16:02.537959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:37.051 [2024-12-06 18:16:02.538094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:37.051 [2024-12-06 18:16:02.538363] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:37.051 [2024-12-06 18:16:02.538395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:37.051 [2024-12-06 18:16:02.538751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:37.051 [2024-12-06 18:16:02.539049] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:37.051 [2024-12-06 18:16:02.539070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:37.051 [2024-12-06 18:16:02.539317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:37.051 18:16:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.051 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.309 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.309 "name": "raid_bdev1", 00:17:37.309 "uuid": "d79c2fc9-4b0a-427c-b6be-f15e9d2faaec", 00:17:37.309 "strip_size_kb": 0, 00:17:37.309 "state": "online", 00:17:37.309 "raid_level": "raid1", 00:17:37.309 "superblock": false, 00:17:37.309 "num_base_bdevs": 4, 00:17:37.309 "num_base_bdevs_discovered": 4, 00:17:37.309 "num_base_bdevs_operational": 4, 00:17:37.309 "base_bdevs_list": [ 00:17:37.309 
{ 00:17:37.309 "name": "BaseBdev1", 00:17:37.309 "uuid": "921ac058-3c2b-5df6-99d2-372e249b92df", 00:17:37.309 "is_configured": true, 00:17:37.309 "data_offset": 0, 00:17:37.309 "data_size": 65536 00:17:37.309 }, 00:17:37.309 { 00:17:37.309 "name": "BaseBdev2", 00:17:37.309 "uuid": "8f2fde58-12ab-506d-bd1a-fc67ff928929", 00:17:37.309 "is_configured": true, 00:17:37.309 "data_offset": 0, 00:17:37.309 "data_size": 65536 00:17:37.309 }, 00:17:37.309 { 00:17:37.309 "name": "BaseBdev3", 00:17:37.309 "uuid": "a451cc86-e7c6-5f49-aaf8-f72d44d6ca0c", 00:17:37.309 "is_configured": true, 00:17:37.309 "data_offset": 0, 00:17:37.309 "data_size": 65536 00:17:37.309 }, 00:17:37.309 { 00:17:37.309 "name": "BaseBdev4", 00:17:37.309 "uuid": "9b584a8c-e0c6-58d7-bfb1-1c051fd5fed1", 00:17:37.309 "is_configured": true, 00:17:37.309 "data_offset": 0, 00:17:37.309 "data_size": 65536 00:17:37.309 } 00:17:37.309 ] 00:17:37.309 }' 00:17:37.309 18:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.309 18:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.567 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:37.567 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.567 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.567 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:37.567 [2024-12-06 18:16:03.068006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.827 
18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.827 [2024-12-06 18:16:03.191561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.827 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.827 "name": "raid_bdev1", 00:17:37.827 "uuid": "d79c2fc9-4b0a-427c-b6be-f15e9d2faaec", 00:17:37.827 "strip_size_kb": 0, 00:17:37.827 "state": "online", 00:17:37.827 "raid_level": "raid1", 00:17:37.827 "superblock": false, 00:17:37.827 "num_base_bdevs": 4, 00:17:37.827 "num_base_bdevs_discovered": 3, 00:17:37.827 "num_base_bdevs_operational": 3, 00:17:37.827 "base_bdevs_list": [ 00:17:37.827 { 00:17:37.827 "name": null, 00:17:37.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.827 "is_configured": false, 00:17:37.827 "data_offset": 0, 00:17:37.827 "data_size": 65536 00:17:37.827 }, 00:17:37.827 { 00:17:37.827 "name": "BaseBdev2", 00:17:37.827 "uuid": "8f2fde58-12ab-506d-bd1a-fc67ff928929", 00:17:37.827 "is_configured": true, 00:17:37.827 "data_offset": 0, 00:17:37.827 "data_size": 65536 00:17:37.827 }, 00:17:37.827 { 00:17:37.827 "name": "BaseBdev3", 00:17:37.827 "uuid": 
"a451cc86-e7c6-5f49-aaf8-f72d44d6ca0c", 00:17:37.827 "is_configured": true, 00:17:37.827 "data_offset": 0, 00:17:37.827 "data_size": 65536 00:17:37.827 }, 00:17:37.827 { 00:17:37.827 "name": "BaseBdev4", 00:17:37.828 "uuid": "9b584a8c-e0c6-58d7-bfb1-1c051fd5fed1", 00:17:37.828 "is_configured": true, 00:17:37.828 "data_offset": 0, 00:17:37.828 "data_size": 65536 00:17:37.828 } 00:17:37.828 ] 00:17:37.828 }' 00:17:37.828 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.828 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.828 [2024-12-06 18:16:03.315718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:37.828 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:37.828 Zero copy mechanism will not be used. 00:17:37.828 Running I/O for 60 seconds... 00:17:38.406 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:38.406 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.407 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.407 [2024-12-06 18:16:03.767394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.407 18:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.407 18:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:38.407 [2024-12-06 18:16:03.853725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:38.407 [2024-12-06 18:16:03.856600] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:38.666 [2024-12-06 18:16:03.963344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:38.666 
[2024-12-06 18:16:03.965231] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:38.666 [2024-12-06 18:16:04.173636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:38.666 [2024-12-06 18:16:04.174085] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:39.184 152.00 IOPS, 456.00 MiB/s [2024-12-06T18:16:04.704Z] [2024-12-06 18:16:04.545873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:39.444 [2024-12-06 18:16:04.772372] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:39.444 [2024-12-06 18:16:04.772816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:39.444 18:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.444 18:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.444 18:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.444 18:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.444 18:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.444 18:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.444 18:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.444 18:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.444 18:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:17:39.444 18:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.444 18:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.444 "name": "raid_bdev1", 00:17:39.444 "uuid": "d79c2fc9-4b0a-427c-b6be-f15e9d2faaec", 00:17:39.444 "strip_size_kb": 0, 00:17:39.444 "state": "online", 00:17:39.444 "raid_level": "raid1", 00:17:39.444 "superblock": false, 00:17:39.444 "num_base_bdevs": 4, 00:17:39.444 "num_base_bdevs_discovered": 4, 00:17:39.444 "num_base_bdevs_operational": 4, 00:17:39.444 "process": { 00:17:39.444 "type": "rebuild", 00:17:39.444 "target": "spare", 00:17:39.444 "progress": { 00:17:39.444 "blocks": 10240, 00:17:39.444 "percent": 15 00:17:39.444 } 00:17:39.444 }, 00:17:39.445 "base_bdevs_list": [ 00:17:39.445 { 00:17:39.445 "name": "spare", 00:17:39.445 "uuid": "a74f4004-66df-5711-803e-976208b39fc6", 00:17:39.445 "is_configured": true, 00:17:39.445 "data_offset": 0, 00:17:39.445 "data_size": 65536 00:17:39.445 }, 00:17:39.445 { 00:17:39.445 "name": "BaseBdev2", 00:17:39.445 "uuid": "8f2fde58-12ab-506d-bd1a-fc67ff928929", 00:17:39.445 "is_configured": true, 00:17:39.445 "data_offset": 0, 00:17:39.445 "data_size": 65536 00:17:39.445 }, 00:17:39.445 { 00:17:39.445 "name": "BaseBdev3", 00:17:39.445 "uuid": "a451cc86-e7c6-5f49-aaf8-f72d44d6ca0c", 00:17:39.445 "is_configured": true, 00:17:39.445 "data_offset": 0, 00:17:39.445 "data_size": 65536 00:17:39.445 }, 00:17:39.445 { 00:17:39.445 "name": "BaseBdev4", 00:17:39.445 "uuid": "9b584a8c-e0c6-58d7-bfb1-1c051fd5fed1", 00:17:39.445 "is_configured": true, 00:17:39.445 "data_offset": 0, 00:17:39.445 "data_size": 65536 00:17:39.445 } 00:17:39.445 ] 00:17:39.445 }' 00:17:39.445 18:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.445 18:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.445 18:16:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.704 18:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.704 18:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:39.704 18:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.704 18:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.704 [2024-12-06 18:16:04.981549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.705 [2024-12-06 18:16:05.135818] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:39.705 [2024-12-06 18:16:05.139333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.705 [2024-12-06 18:16:05.139555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.705 [2024-12-06 18:16:05.139587] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:39.705 [2024-12-06 18:16:05.182561] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:39.705 18:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.705 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:39.705 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.705 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.705 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.705 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.705 18:16:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:39.705 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.705 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.705 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.705 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.705 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.705 18:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.705 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.705 18:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.981 18:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.981 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.981 "name": "raid_bdev1", 00:17:39.981 "uuid": "d79c2fc9-4b0a-427c-b6be-f15e9d2faaec", 00:17:39.981 "strip_size_kb": 0, 00:17:39.981 "state": "online", 00:17:39.981 "raid_level": "raid1", 00:17:39.981 "superblock": false, 00:17:39.981 "num_base_bdevs": 4, 00:17:39.981 "num_base_bdevs_discovered": 3, 00:17:39.981 "num_base_bdevs_operational": 3, 00:17:39.981 "base_bdevs_list": [ 00:17:39.981 { 00:17:39.981 "name": null, 00:17:39.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.981 "is_configured": false, 00:17:39.981 "data_offset": 0, 00:17:39.981 "data_size": 65536 00:17:39.981 }, 00:17:39.981 { 00:17:39.981 "name": "BaseBdev2", 00:17:39.981 "uuid": "8f2fde58-12ab-506d-bd1a-fc67ff928929", 00:17:39.981 "is_configured": true, 00:17:39.981 "data_offset": 0, 00:17:39.981 "data_size": 65536 00:17:39.981 }, 
00:17:39.981 { 00:17:39.981 "name": "BaseBdev3", 00:17:39.981 "uuid": "a451cc86-e7c6-5f49-aaf8-f72d44d6ca0c", 00:17:39.981 "is_configured": true, 00:17:39.981 "data_offset": 0, 00:17:39.981 "data_size": 65536 00:17:39.981 }, 00:17:39.981 { 00:17:39.981 "name": "BaseBdev4", 00:17:39.981 "uuid": "9b584a8c-e0c6-58d7-bfb1-1c051fd5fed1", 00:17:39.981 "is_configured": true, 00:17:39.981 "data_offset": 0, 00:17:39.981 "data_size": 65536 00:17:39.981 } 00:17:39.981 ] 00:17:39.981 }' 00:17:39.981 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.981 18:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:40.239 115.50 IOPS, 346.50 MiB/s [2024-12-06T18:16:05.759Z] 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.239 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.239 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.239 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.239 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.498 "name": "raid_bdev1", 00:17:40.498 "uuid": 
"d79c2fc9-4b0a-427c-b6be-f15e9d2faaec", 00:17:40.498 "strip_size_kb": 0, 00:17:40.498 "state": "online", 00:17:40.498 "raid_level": "raid1", 00:17:40.498 "superblock": false, 00:17:40.498 "num_base_bdevs": 4, 00:17:40.498 "num_base_bdevs_discovered": 3, 00:17:40.498 "num_base_bdevs_operational": 3, 00:17:40.498 "base_bdevs_list": [ 00:17:40.498 { 00:17:40.498 "name": null, 00:17:40.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.498 "is_configured": false, 00:17:40.498 "data_offset": 0, 00:17:40.498 "data_size": 65536 00:17:40.498 }, 00:17:40.498 { 00:17:40.498 "name": "BaseBdev2", 00:17:40.498 "uuid": "8f2fde58-12ab-506d-bd1a-fc67ff928929", 00:17:40.498 "is_configured": true, 00:17:40.498 "data_offset": 0, 00:17:40.498 "data_size": 65536 00:17:40.498 }, 00:17:40.498 { 00:17:40.498 "name": "BaseBdev3", 00:17:40.498 "uuid": "a451cc86-e7c6-5f49-aaf8-f72d44d6ca0c", 00:17:40.498 "is_configured": true, 00:17:40.498 "data_offset": 0, 00:17:40.498 "data_size": 65536 00:17:40.498 }, 00:17:40.498 { 00:17:40.498 "name": "BaseBdev4", 00:17:40.498 "uuid": "9b584a8c-e0c6-58d7-bfb1-1c051fd5fed1", 00:17:40.498 "is_configured": true, 00:17:40.498 "data_offset": 0, 00:17:40.498 "data_size": 65536 00:17:40.498 } 00:17:40.498 ] 00:17:40.498 }' 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.498 [2024-12-06 18:16:05.940502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.498 18:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:40.756 [2024-12-06 18:16:06.026737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:40.756 [2024-12-06 18:16:06.029537] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:40.756 [2024-12-06 18:16:06.165111] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:41.016 133.00 IOPS, 399.00 MiB/s [2024-12-06T18:16:06.536Z] [2024-12-06 18:16:06.391975] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:41.016 [2024-12-06 18:16:06.392545] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:41.275 [2024-12-06 18:16:06.739893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:41.534 [2024-12-06 18:16:06.883522] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:41.534 [2024-12-06 18:16:06.884820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:41.534 18:16:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.534 18:16:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.534 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:41.534 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.534 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.534 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.534 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.534 18:16:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.534 18:16:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.534 18:16:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.793 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.793 "name": "raid_bdev1", 00:17:41.793 "uuid": "d79c2fc9-4b0a-427c-b6be-f15e9d2faaec", 00:17:41.793 "strip_size_kb": 0, 00:17:41.793 "state": "online", 00:17:41.793 "raid_level": "raid1", 00:17:41.793 "superblock": false, 00:17:41.793 "num_base_bdevs": 4, 00:17:41.793 "num_base_bdevs_discovered": 4, 00:17:41.793 "num_base_bdevs_operational": 4, 00:17:41.793 "process": { 00:17:41.793 "type": "rebuild", 00:17:41.793 "target": "spare", 00:17:41.793 "progress": { 00:17:41.793 "blocks": 10240, 00:17:41.793 "percent": 15 00:17:41.793 } 00:17:41.793 }, 00:17:41.793 "base_bdevs_list": [ 00:17:41.793 { 00:17:41.793 "name": "spare", 00:17:41.793 "uuid": "a74f4004-66df-5711-803e-976208b39fc6", 00:17:41.793 "is_configured": true, 00:17:41.793 "data_offset": 0, 00:17:41.793 "data_size": 65536 00:17:41.793 }, 00:17:41.793 { 00:17:41.793 "name": "BaseBdev2", 00:17:41.793 "uuid": "8f2fde58-12ab-506d-bd1a-fc67ff928929", 00:17:41.793 "is_configured": true, 00:17:41.793 "data_offset": 0, 00:17:41.793 "data_size": 65536 00:17:41.793 }, 00:17:41.793 { 00:17:41.793 "name": "BaseBdev3", 00:17:41.793 "uuid": 
"a451cc86-e7c6-5f49-aaf8-f72d44d6ca0c", 00:17:41.793 "is_configured": true, 00:17:41.793 "data_offset": 0, 00:17:41.793 "data_size": 65536 00:17:41.793 }, 00:17:41.793 { 00:17:41.793 "name": "BaseBdev4", 00:17:41.793 "uuid": "9b584a8c-e0c6-58d7-bfb1-1c051fd5fed1", 00:17:41.793 "is_configured": true, 00:17:41.793 "data_offset": 0, 00:17:41.793 "data_size": 65536 00:17:41.793 } 00:17:41.793 ] 00:17:41.793 }' 00:17:41.793 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.793 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.793 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.793 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.793 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:41.793 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:41.793 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.794 [2024-12-06 18:16:07.166764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:41.794 [2024-12-06 18:16:07.227758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:41.794 [2024-12-06 18:16:07.258756] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: 
slot: 1 raid_ch: 0x60d000006220 00:17:41.794 [2024-12-06 18:16:07.258996] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.794 18:16:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.052 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.052 "name": "raid_bdev1", 00:17:42.052 "uuid": "d79c2fc9-4b0a-427c-b6be-f15e9d2faaec", 00:17:42.052 "strip_size_kb": 0, 00:17:42.052 "state": "online", 00:17:42.052 "raid_level": "raid1", 00:17:42.052 "superblock": false, 00:17:42.052 "num_base_bdevs": 4, 00:17:42.052 
"num_base_bdevs_discovered": 3, 00:17:42.052 "num_base_bdevs_operational": 3, 00:17:42.052 "process": { 00:17:42.052 "type": "rebuild", 00:17:42.052 "target": "spare", 00:17:42.052 "progress": { 00:17:42.052 "blocks": 14336, 00:17:42.052 "percent": 21 00:17:42.052 } 00:17:42.052 }, 00:17:42.052 "base_bdevs_list": [ 00:17:42.052 { 00:17:42.052 "name": "spare", 00:17:42.052 "uuid": "a74f4004-66df-5711-803e-976208b39fc6", 00:17:42.052 "is_configured": true, 00:17:42.052 "data_offset": 0, 00:17:42.052 "data_size": 65536 00:17:42.052 }, 00:17:42.052 { 00:17:42.052 "name": null, 00:17:42.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.052 "is_configured": false, 00:17:42.052 "data_offset": 0, 00:17:42.052 "data_size": 65536 00:17:42.052 }, 00:17:42.052 { 00:17:42.052 "name": "BaseBdev3", 00:17:42.052 "uuid": "a451cc86-e7c6-5f49-aaf8-f72d44d6ca0c", 00:17:42.052 "is_configured": true, 00:17:42.052 "data_offset": 0, 00:17:42.052 "data_size": 65536 00:17:42.052 }, 00:17:42.052 { 00:17:42.052 "name": "BaseBdev4", 00:17:42.052 "uuid": "9b584a8c-e0c6-58d7-bfb1-1c051fd5fed1", 00:17:42.052 "is_configured": true, 00:17:42.052 "data_offset": 0, 00:17:42.052 "data_size": 65536 00:17:42.052 } 00:17:42.052 ] 00:17:42.052 }' 00:17:42.052 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.052 117.00 IOPS, 351.00 MiB/s [2024-12-06T18:16:07.572Z] 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.052 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.052 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.052 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=523 00:17:42.052 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.052 18:16:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.052 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.052 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.052 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.052 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.052 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.052 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.053 18:16:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.053 18:16:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.053 18:16:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.053 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.053 "name": "raid_bdev1", 00:17:42.053 "uuid": "d79c2fc9-4b0a-427c-b6be-f15e9d2faaec", 00:17:42.053 "strip_size_kb": 0, 00:17:42.053 "state": "online", 00:17:42.053 "raid_level": "raid1", 00:17:42.053 "superblock": false, 00:17:42.053 "num_base_bdevs": 4, 00:17:42.053 "num_base_bdevs_discovered": 3, 00:17:42.053 "num_base_bdevs_operational": 3, 00:17:42.053 "process": { 00:17:42.053 "type": "rebuild", 00:17:42.053 "target": "spare", 00:17:42.053 "progress": { 00:17:42.053 "blocks": 16384, 00:17:42.053 "percent": 25 00:17:42.053 } 00:17:42.053 }, 00:17:42.053 "base_bdevs_list": [ 00:17:42.053 { 00:17:42.053 "name": "spare", 00:17:42.053 "uuid": "a74f4004-66df-5711-803e-976208b39fc6", 00:17:42.053 "is_configured": true, 00:17:42.053 "data_offset": 0, 00:17:42.053 "data_size": 65536 
00:17:42.053 }, 00:17:42.053 { 00:17:42.053 "name": null, 00:17:42.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.053 "is_configured": false, 00:17:42.053 "data_offset": 0, 00:17:42.053 "data_size": 65536 00:17:42.053 }, 00:17:42.053 { 00:17:42.053 "name": "BaseBdev3", 00:17:42.053 "uuid": "a451cc86-e7c6-5f49-aaf8-f72d44d6ca0c", 00:17:42.053 "is_configured": true, 00:17:42.053 "data_offset": 0, 00:17:42.053 "data_size": 65536 00:17:42.053 }, 00:17:42.053 { 00:17:42.053 "name": "BaseBdev4", 00:17:42.053 "uuid": "9b584a8c-e0c6-58d7-bfb1-1c051fd5fed1", 00:17:42.053 "is_configured": true, 00:17:42.053 "data_offset": 0, 00:17:42.053 "data_size": 65536 00:17:42.053 } 00:17:42.053 ] 00:17:42.053 }' 00:17:42.053 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.053 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.053 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.311 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.311 18:16:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:42.311 [2024-12-06 18:16:07.616278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:42.311 [2024-12-06 18:16:07.616980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:42.879 [2024-12-06 18:16:08.099088] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:43.138 106.60 IOPS, 319.80 MiB/s [2024-12-06T18:16:08.658Z] [2024-12-06 18:16:08.453158] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:43.138 18:16:08 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.138 18:16:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.138 18:16:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.138 18:16:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.138 18:16:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.138 18:16:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.138 18:16:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.138 18:16:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.138 18:16:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.138 18:16:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.138 18:16:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.398 18:16:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.398 "name": "raid_bdev1", 00:17:43.398 "uuid": "d79c2fc9-4b0a-427c-b6be-f15e9d2faaec", 00:17:43.398 "strip_size_kb": 0, 00:17:43.398 "state": "online", 00:17:43.398 "raid_level": "raid1", 00:17:43.398 "superblock": false, 00:17:43.398 "num_base_bdevs": 4, 00:17:43.398 "num_base_bdevs_discovered": 3, 00:17:43.398 "num_base_bdevs_operational": 3, 00:17:43.398 "process": { 00:17:43.398 "type": "rebuild", 00:17:43.398 "target": "spare", 00:17:43.398 "progress": { 00:17:43.398 "blocks": 36864, 00:17:43.398 "percent": 56 00:17:43.398 } 00:17:43.398 }, 00:17:43.398 "base_bdevs_list": [ 00:17:43.398 { 00:17:43.398 "name": "spare", 00:17:43.398 "uuid": "a74f4004-66df-5711-803e-976208b39fc6", 
00:17:43.398 "is_configured": true, 00:17:43.398 "data_offset": 0, 00:17:43.398 "data_size": 65536 00:17:43.398 }, 00:17:43.398 { 00:17:43.398 "name": null, 00:17:43.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.398 "is_configured": false, 00:17:43.398 "data_offset": 0, 00:17:43.398 "data_size": 65536 00:17:43.398 }, 00:17:43.398 { 00:17:43.398 "name": "BaseBdev3", 00:17:43.398 "uuid": "a451cc86-e7c6-5f49-aaf8-f72d44d6ca0c", 00:17:43.398 "is_configured": true, 00:17:43.398 "data_offset": 0, 00:17:43.398 "data_size": 65536 00:17:43.398 }, 00:17:43.398 { 00:17:43.398 "name": "BaseBdev4", 00:17:43.398 "uuid": "9b584a8c-e0c6-58d7-bfb1-1c051fd5fed1", 00:17:43.398 "is_configured": true, 00:17:43.398 "data_offset": 0, 00:17:43.398 "data_size": 65536 00:17:43.398 } 00:17:43.398 ] 00:17:43.398 }' 00:17:43.398 18:16:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.398 18:16:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.398 18:16:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.398 18:16:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.398 18:16:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:43.967 [2024-12-06 18:16:09.187782] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:17:44.226 95.17 IOPS, 285.50 MiB/s [2024-12-06T18:16:09.746Z] [2024-12-06 18:16:09.627667] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:44.484 18:16:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.484 18:16:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.484 
18:16:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.485 18:16:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.485 18:16:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.485 18:16:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.485 18:16:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.485 18:16:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.485 18:16:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.485 18:16:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.485 18:16:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.485 18:16:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.485 "name": "raid_bdev1", 00:17:44.485 "uuid": "d79c2fc9-4b0a-427c-b6be-f15e9d2faaec", 00:17:44.485 "strip_size_kb": 0, 00:17:44.485 "state": "online", 00:17:44.485 "raid_level": "raid1", 00:17:44.485 "superblock": false, 00:17:44.485 "num_base_bdevs": 4, 00:17:44.485 "num_base_bdevs_discovered": 3, 00:17:44.485 "num_base_bdevs_operational": 3, 00:17:44.485 "process": { 00:17:44.485 "type": "rebuild", 00:17:44.485 "target": "spare", 00:17:44.485 "progress": { 00:17:44.485 "blocks": 55296, 00:17:44.485 "percent": 84 00:17:44.485 } 00:17:44.485 }, 00:17:44.485 "base_bdevs_list": [ 00:17:44.485 { 00:17:44.485 "name": "spare", 00:17:44.485 "uuid": "a74f4004-66df-5711-803e-976208b39fc6", 00:17:44.485 "is_configured": true, 00:17:44.485 "data_offset": 0, 00:17:44.485 "data_size": 65536 00:17:44.485 }, 00:17:44.485 { 00:17:44.485 "name": null, 00:17:44.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.485 
"is_configured": false, 00:17:44.485 "data_offset": 0, 00:17:44.485 "data_size": 65536 00:17:44.485 }, 00:17:44.485 { 00:17:44.485 "name": "BaseBdev3", 00:17:44.485 "uuid": "a451cc86-e7c6-5f49-aaf8-f72d44d6ca0c", 00:17:44.485 "is_configured": true, 00:17:44.485 "data_offset": 0, 00:17:44.485 "data_size": 65536 00:17:44.485 }, 00:17:44.485 { 00:17:44.485 "name": "BaseBdev4", 00:17:44.485 "uuid": "9b584a8c-e0c6-58d7-bfb1-1c051fd5fed1", 00:17:44.485 "is_configured": true, 00:17:44.485 "data_offset": 0, 00:17:44.485 "data_size": 65536 00:17:44.485 } 00:17:44.485 ] 00:17:44.485 }' 00:17:44.485 18:16:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.485 18:16:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.485 18:16:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.485 18:16:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.485 18:16:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:45.115 [2024-12-06 18:16:10.291099] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:45.115 87.43 IOPS, 262.29 MiB/s [2024-12-06T18:16:10.635Z] [2024-12-06 18:16:10.391109] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:45.115 [2024-12-06 18:16:10.393688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.701 18:16:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:45.701 18:16:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.701 18:16:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.701 18:16:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- 
# local process_type=rebuild 00:17:45.701 18:16:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.701 18:16:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.701 18:16:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.701 18:16:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.701 18:16:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.701 18:16:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.701 18:16:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.701 "name": "raid_bdev1", 00:17:45.701 "uuid": "d79c2fc9-4b0a-427c-b6be-f15e9d2faaec", 00:17:45.701 "strip_size_kb": 0, 00:17:45.701 "state": "online", 00:17:45.701 "raid_level": "raid1", 00:17:45.701 "superblock": false, 00:17:45.701 "num_base_bdevs": 4, 00:17:45.701 "num_base_bdevs_discovered": 3, 00:17:45.701 "num_base_bdevs_operational": 3, 00:17:45.701 "base_bdevs_list": [ 00:17:45.701 { 00:17:45.701 "name": "spare", 00:17:45.701 "uuid": "a74f4004-66df-5711-803e-976208b39fc6", 00:17:45.701 "is_configured": true, 00:17:45.701 "data_offset": 0, 00:17:45.701 "data_size": 65536 00:17:45.701 }, 00:17:45.701 { 00:17:45.701 "name": null, 00:17:45.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.701 "is_configured": false, 00:17:45.701 "data_offset": 0, 00:17:45.701 "data_size": 65536 00:17:45.701 }, 00:17:45.701 { 00:17:45.701 "name": "BaseBdev3", 00:17:45.701 "uuid": "a451cc86-e7c6-5f49-aaf8-f72d44d6ca0c", 00:17:45.701 "is_configured": true, 00:17:45.701 "data_offset": 0, 00:17:45.701 "data_size": 65536 00:17:45.701 }, 00:17:45.701 { 00:17:45.701 "name": "BaseBdev4", 
00:17:45.701 "uuid": "9b584a8c-e0c6-58d7-bfb1-1c051fd5fed1", 00:17:45.701 "is_configured": true, 00:17:45.701 "data_offset": 0, 00:17:45.701 "data_size": 65536 00:17:45.701 } 00:17:45.701 ] 00:17:45.701 }' 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.701 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.701 "name": "raid_bdev1", 
00:17:45.701 "uuid": "d79c2fc9-4b0a-427c-b6be-f15e9d2faaec", 00:17:45.701 "strip_size_kb": 0, 00:17:45.701 "state": "online", 00:17:45.701 "raid_level": "raid1", 00:17:45.701 "superblock": false, 00:17:45.701 "num_base_bdevs": 4, 00:17:45.701 "num_base_bdevs_discovered": 3, 00:17:45.701 "num_base_bdevs_operational": 3, 00:17:45.701 "base_bdevs_list": [ 00:17:45.701 { 00:17:45.701 "name": "spare", 00:17:45.701 "uuid": "a74f4004-66df-5711-803e-976208b39fc6", 00:17:45.701 "is_configured": true, 00:17:45.701 "data_offset": 0, 00:17:45.702 "data_size": 65536 00:17:45.702 }, 00:17:45.702 { 00:17:45.702 "name": null, 00:17:45.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.702 "is_configured": false, 00:17:45.702 "data_offset": 0, 00:17:45.702 "data_size": 65536 00:17:45.702 }, 00:17:45.702 { 00:17:45.702 "name": "BaseBdev3", 00:17:45.702 "uuid": "a451cc86-e7c6-5f49-aaf8-f72d44d6ca0c", 00:17:45.702 "is_configured": true, 00:17:45.702 "data_offset": 0, 00:17:45.702 "data_size": 65536 00:17:45.702 }, 00:17:45.702 { 00:17:45.702 "name": "BaseBdev4", 00:17:45.702 "uuid": "9b584a8c-e0c6-58d7-bfb1-1c051fd5fed1", 00:17:45.702 "is_configured": true, 00:17:45.702 "data_offset": 0, 00:17:45.702 "data_size": 65536 00:17:45.702 } 00:17:45.702 ] 00:17:45.702 }' 00:17:45.702 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.961 18:16:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.961 "name": "raid_bdev1", 00:17:45.961 "uuid": "d79c2fc9-4b0a-427c-b6be-f15e9d2faaec", 00:17:45.961 "strip_size_kb": 0, 00:17:45.961 "state": "online", 00:17:45.961 "raid_level": "raid1", 00:17:45.961 "superblock": false, 00:17:45.961 "num_base_bdevs": 4, 00:17:45.961 "num_base_bdevs_discovered": 3, 00:17:45.961 "num_base_bdevs_operational": 3, 00:17:45.961 "base_bdevs_list": [ 00:17:45.961 { 00:17:45.961 "name": "spare", 00:17:45.961 "uuid": "a74f4004-66df-5711-803e-976208b39fc6", 00:17:45.961 
"is_configured": true, 00:17:45.961 "data_offset": 0, 00:17:45.961 "data_size": 65536 00:17:45.961 }, 00:17:45.961 { 00:17:45.961 "name": null, 00:17:45.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.961 "is_configured": false, 00:17:45.961 "data_offset": 0, 00:17:45.961 "data_size": 65536 00:17:45.961 }, 00:17:45.961 { 00:17:45.961 "name": "BaseBdev3", 00:17:45.961 "uuid": "a451cc86-e7c6-5f49-aaf8-f72d44d6ca0c", 00:17:45.961 "is_configured": true, 00:17:45.961 "data_offset": 0, 00:17:45.961 "data_size": 65536 00:17:45.961 }, 00:17:45.961 { 00:17:45.961 "name": "BaseBdev4", 00:17:45.961 "uuid": "9b584a8c-e0c6-58d7-bfb1-1c051fd5fed1", 00:17:45.961 "is_configured": true, 00:17:45.961 "data_offset": 0, 00:17:45.961 "data_size": 65536 00:17:45.961 } 00:17:45.961 ] 00:17:45.961 }' 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.961 18:16:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:46.529 80.62 IOPS, 241.88 MiB/s [2024-12-06T18:16:12.049Z] 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:46.529 [2024-12-06 18:16:11.818713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.529 [2024-12-06 18:16:11.818910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.529 00:17:46.529 Latency(us) 00:17:46.529 [2024-12-06T18:16:12.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.529 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:46.529 raid_bdev1 : 8.54 77.63 232.90 0.00 0.00 18231.74 320.23 117249.86 00:17:46.529 [2024-12-06T18:16:12.049Z] 
=================================================================================================================== 00:17:46.529 [2024-12-06T18:16:12.049Z] Total : 77.63 232.90 0.00 0.00 18231.74 320.23 117249.86 00:17:46.529 [2024-12-06 18:16:11.878677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.529 [2024-12-06 18:16:11.878793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.529 [2024-12-06 18:16:11.878929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.529 [2024-12-06 18:16:11.878954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:46.529 { 00:17:46.529 "results": [ 00:17:46.529 { 00:17:46.529 "job": "raid_bdev1", 00:17:46.529 "core_mask": "0x1", 00:17:46.529 "workload": "randrw", 00:17:46.529 "percentage": 50, 00:17:46.529 "status": "finished", 00:17:46.529 "queue_depth": 2, 00:17:46.529 "io_size": 3145728, 00:17:46.529 "runtime": 8.54001, 00:17:46.529 "iops": 77.63456951455561, 00:17:46.529 "mibps": 232.90370854366682, 00:17:46.529 "io_failed": 0, 00:17:46.529 "io_timeout": 0, 00:17:46.529 "avg_latency_us": 18231.74221856575, 00:17:46.529 "min_latency_us": 320.2327272727273, 00:17:46.529 "max_latency_us": 117249.86181818182 00:17:46.529 } 00:17:46.529 ], 00:17:46.529 "core_count": 1 00:17:46.529 } 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.529 18:16:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:46.788 /dev/nbd0 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( 
i = 1 )) 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.047 1+0 records in 00:17:47.047 1+0 records out 00:17:47.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050382 s, 8.1 MB/s 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:47.047 18:16:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:47.047 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:47.318 /dev/nbd1 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.318 18:16:12 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.318 1+0 records in 00:17:47.318 1+0 records out 00:17:47.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629426 s, 6.5 MB/s 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:47.318 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:47.578 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:47.578 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.578 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:47.578 
18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:47.578 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:47.578 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.578 18:16:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:47.837 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:48.096 /dev/nbd1 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:48.096 1+0 records in 00:17:48.096 1+0 records out 00:17:48.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679718 s, 6.0 
MB/s 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.096 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.355 18:16:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:48.968 18:16:14 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79114 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79114 ']' 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79114 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79114 00:17:48.968 killing process with pid 79114 00:17:48.968 Received shutdown signal, test time was about 10.892536 seconds 00:17:48.968 00:17:48.968 Latency(us) 00:17:48.968 [2024-12-06T18:16:14.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.968 [2024-12-06T18:16:14.488Z] =================================================================================================================== 00:17:48.968 [2024-12-06T18:16:14.488Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79114' 00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79114 00:17:48.968 [2024-12-06 18:16:14.211163] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:17:48.968 18:16:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79114 00:17:49.236 [2024-12-06 18:16:14.608417] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:50.609 ************************************ 00:17:50.609 END TEST raid_rebuild_test_io 00:17:50.609 ************************************ 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:50.609 00:17:50.609 real 0m14.552s 00:17:50.609 user 0m19.210s 00:17:50.609 sys 0m1.777s 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.609 18:16:15 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:17:50.609 18:16:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:50.609 18:16:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.609 18:16:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:50.609 ************************************ 00:17:50.609 START TEST raid_rebuild_test_sb_io 00:17:50.609 ************************************ 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:50.609 18:16:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79535 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79535 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79535 ']' 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.609 18:16:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.609 [2024-12-06 18:16:15.906730] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:17:50.609 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:50.609 Zero copy mechanism will not be used. 00:17:50.609 [2024-12-06 18:16:15.907159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79535 ] 00:17:50.609 [2024-12-06 18:16:16.080276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.867 [2024-12-06 18:16:16.216569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.126 [2024-12-06 18:16:16.455433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.126 [2024-12-06 18:16:16.455628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.384 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.384 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:17:51.384 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:51.384 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:51.384 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.384 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 BaseBdev1_malloc 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 [2024-12-06 18:16:16.940887] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:51.642 [2024-12-06 18:16:16.941220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.642 [2024-12-06 18:16:16.941262] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:51.642 [2024-12-06 18:16:16.941281] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.642 [2024-12-06 18:16:16.944345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.642 [2024-12-06 18:16:16.944566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:51.642 BaseBdev1 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 BaseBdev2_malloc 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 [2024-12-06 18:16:16.994240] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:51.642 [2024-12-06 18:16:16.994317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.642 [2024-12-06 18:16:16.994349] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:51.642 [2024-12-06 18:16:16.994368] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.642 [2024-12-06 18:16:16.997196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.642 [2024-12-06 18:16:16.997243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:51.642 BaseBdev2 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.642 18:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 BaseBdev3_malloc 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.642 18:16:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 [2024-12-06 18:16:17.056842] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:51.642 [2024-12-06 18:16:17.056911] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.642 [2024-12-06 18:16:17.056942] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:51.642 [2024-12-06 18:16:17.056960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.642 [2024-12-06 18:16:17.059952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.642 [2024-12-06 18:16:17.060052] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:51.642 BaseBdev3 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 BaseBdev4_malloc 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 [2024-12-06 18:16:17.105244] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:17:51.642 [2024-12-06 18:16:17.105314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.642 [2024-12-06 18:16:17.105342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:51.642 [2024-12-06 18:16:17.105359] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.642 [2024-12-06 18:16:17.108304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.642 [2024-12-06 18:16:17.108355] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:51.642 BaseBdev4 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 spare_malloc 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.642 spare_delay 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:51.642 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.901 [2024-12-06 18:16:17.162457] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:51.901 [2024-12-06 18:16:17.162547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.901 [2024-12-06 18:16:17.162576] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:51.901 [2024-12-06 18:16:17.162594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.901 [2024-12-06 18:16:17.165464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.901 [2024-12-06 18:16:17.165540] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:51.901 spare 00:17:51.901 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.901 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:51.901 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.901 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.901 [2024-12-06 18:16:17.170560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.901 [2024-12-06 18:16:17.173118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.901 [2024-12-06 18:16:17.173229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:51.901 [2024-12-06 18:16:17.173307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:51.901 [2024-12-06 18:16:17.173542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:17:51.901 [2024-12-06 18:16:17.173577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:51.901 [2024-12-06 18:16:17.173935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:51.901 [2024-12-06 18:16:17.174183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:51.901 [2024-12-06 18:16:17.174200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:51.901 [2024-12-06 18:16:17.174457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.901 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.901 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:51.901 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.901 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.901 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.901 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.901 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.901 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.902 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.902 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.902 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.902 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:51.902 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.902 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.902 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.902 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.902 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.902 "name": "raid_bdev1", 00:17:51.902 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:17:51.902 "strip_size_kb": 0, 00:17:51.902 "state": "online", 00:17:51.902 "raid_level": "raid1", 00:17:51.902 "superblock": true, 00:17:51.902 "num_base_bdevs": 4, 00:17:51.902 "num_base_bdevs_discovered": 4, 00:17:51.902 "num_base_bdevs_operational": 4, 00:17:51.902 "base_bdevs_list": [ 00:17:51.902 { 00:17:51.902 "name": "BaseBdev1", 00:17:51.902 "uuid": "9e583d17-4b60-56d7-adc8-3c300ef83850", 00:17:51.902 "is_configured": true, 00:17:51.902 "data_offset": 2048, 00:17:51.902 "data_size": 63488 00:17:51.902 }, 00:17:51.902 { 00:17:51.902 "name": "BaseBdev2", 00:17:51.902 "uuid": "11c40c38-7288-5e2a-bf47-41319242998f", 00:17:51.902 "is_configured": true, 00:17:51.902 "data_offset": 2048, 00:17:51.902 "data_size": 63488 00:17:51.902 }, 00:17:51.902 { 00:17:51.902 "name": "BaseBdev3", 00:17:51.902 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:17:51.902 "is_configured": true, 00:17:51.902 "data_offset": 2048, 00:17:51.902 "data_size": 63488 00:17:51.902 }, 00:17:51.902 { 00:17:51.902 "name": "BaseBdev4", 00:17:51.902 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:17:51.902 "is_configured": true, 00:17:51.902 "data_offset": 2048, 00:17:51.902 "data_size": 63488 00:17:51.902 } 00:17:51.902 ] 00:17:51.902 }' 00:17:51.902 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:51.902 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.469 [2024-12-06 18:16:17.739192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:52.469 18:16:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.469 [2024-12-06 18:16:17.842712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.469 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.469 "name": "raid_bdev1", 00:17:52.469 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:17:52.469 "strip_size_kb": 0, 00:17:52.469 "state": "online", 00:17:52.469 "raid_level": "raid1", 00:17:52.469 "superblock": true, 00:17:52.469 "num_base_bdevs": 4, 00:17:52.469 "num_base_bdevs_discovered": 3, 00:17:52.469 "num_base_bdevs_operational": 3, 00:17:52.469 "base_bdevs_list": [ 00:17:52.469 { 00:17:52.469 "name": null, 00:17:52.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.469 "is_configured": false, 00:17:52.469 "data_offset": 0, 00:17:52.469 "data_size": 63488 00:17:52.469 }, 00:17:52.469 { 00:17:52.469 "name": "BaseBdev2", 00:17:52.469 "uuid": "11c40c38-7288-5e2a-bf47-41319242998f", 00:17:52.469 "is_configured": true, 00:17:52.469 "data_offset": 2048, 00:17:52.469 "data_size": 63488 00:17:52.469 }, 00:17:52.469 { 00:17:52.469 "name": "BaseBdev3", 00:17:52.469 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:17:52.469 "is_configured": true, 00:17:52.469 "data_offset": 2048, 00:17:52.469 "data_size": 63488 00:17:52.470 }, 00:17:52.470 { 00:17:52.470 "name": "BaseBdev4", 00:17:52.470 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:17:52.470 "is_configured": true, 00:17:52.470 "data_offset": 2048, 00:17:52.470 "data_size": 63488 00:17:52.470 } 00:17:52.470 ] 00:17:52.470 }' 00:17:52.470 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.470 18:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.470 [2024-12-06 18:16:17.975059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:52.470 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:52.470 Zero copy mechanism will not be used. 
00:17:52.470 Running I/O for 60 seconds... 00:17:53.037 18:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:53.037 18:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.037 18:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.037 [2024-12-06 18:16:18.393590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.037 18:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.037 18:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:53.037 [2024-12-06 18:16:18.494239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:53.037 [2024-12-06 18:16:18.497113] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:53.296 [2024-12-06 18:16:18.617299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:53.296 [2024-12-06 18:16:18.619290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:53.556 [2024-12-06 18:16:18.844844] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:53.556 [2024-12-06 18:16:18.845967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:53.814 128.00 IOPS, 384.00 MiB/s [2024-12-06T18:16:19.334Z] [2024-12-06 18:16:19.195425] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:53.814 [2024-12-06 18:16:19.331978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:54.080 
18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.080 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.080 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.080 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.080 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.080 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.080 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.080 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.080 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.080 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.080 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.080 "name": "raid_bdev1", 00:17:54.080 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:17:54.080 "strip_size_kb": 0, 00:17:54.080 "state": "online", 00:17:54.080 "raid_level": "raid1", 00:17:54.080 "superblock": true, 00:17:54.080 "num_base_bdevs": 4, 00:17:54.080 "num_base_bdevs_discovered": 4, 00:17:54.080 "num_base_bdevs_operational": 4, 00:17:54.080 "process": { 00:17:54.080 "type": "rebuild", 00:17:54.080 "target": "spare", 00:17:54.080 "progress": { 00:17:54.080 "blocks": 10240, 00:17:54.080 "percent": 16 00:17:54.080 } 00:17:54.080 }, 00:17:54.080 "base_bdevs_list": [ 00:17:54.080 { 00:17:54.080 "name": "spare", 00:17:54.080 "uuid": "c32fefe9-e4e1-58b1-ac97-3438a3cde707", 00:17:54.080 "is_configured": true, 00:17:54.080 "data_offset": 
2048, 00:17:54.080 "data_size": 63488 00:17:54.080 }, 00:17:54.080 { 00:17:54.080 "name": "BaseBdev2", 00:17:54.080 "uuid": "11c40c38-7288-5e2a-bf47-41319242998f", 00:17:54.080 "is_configured": true, 00:17:54.080 "data_offset": 2048, 00:17:54.080 "data_size": 63488 00:17:54.080 }, 00:17:54.080 { 00:17:54.080 "name": "BaseBdev3", 00:17:54.080 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:17:54.080 "is_configured": true, 00:17:54.080 "data_offset": 2048, 00:17:54.080 "data_size": 63488 00:17:54.080 }, 00:17:54.080 { 00:17:54.080 "name": "BaseBdev4", 00:17:54.080 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:17:54.080 "is_configured": true, 00:17:54.080 "data_offset": 2048, 00:17:54.080 "data_size": 63488 00:17:54.080 } 00:17:54.080 ] 00:17:54.080 }' 00:17:54.080 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.080 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.080 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.340 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.340 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:54.340 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.340 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.340 [2024-12-06 18:16:19.622592] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:54.340 [2024-12-06 18:16:19.625474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.340 [2024-12-06 18:16:19.746261] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 
18432 00:17:54.602 [2024-12-06 18:16:19.858477] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:54.602 [2024-12-06 18:16:19.874133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.602 [2024-12-06 18:16:19.874458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.602 [2024-12-06 18:16:19.874546] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:54.602 [2024-12-06 18:16:19.916667] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.602 18:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.602 107.50 IOPS, 322.50 MiB/s [2024-12-06T18:16:20.122Z] 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.602 "name": "raid_bdev1", 00:17:54.602 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:17:54.602 "strip_size_kb": 0, 00:17:54.602 "state": "online", 00:17:54.602 "raid_level": "raid1", 00:17:54.602 "superblock": true, 00:17:54.602 "num_base_bdevs": 4, 00:17:54.602 "num_base_bdevs_discovered": 3, 00:17:54.602 "num_base_bdevs_operational": 3, 00:17:54.602 "base_bdevs_list": [ 00:17:54.602 { 00:17:54.602 "name": null, 00:17:54.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.602 "is_configured": false, 00:17:54.602 "data_offset": 0, 00:17:54.602 "data_size": 63488 00:17:54.602 }, 00:17:54.602 { 00:17:54.602 "name": "BaseBdev2", 00:17:54.602 "uuid": "11c40c38-7288-5e2a-bf47-41319242998f", 00:17:54.602 "is_configured": true, 00:17:54.602 "data_offset": 2048, 00:17:54.602 "data_size": 63488 00:17:54.603 }, 00:17:54.603 { 00:17:54.603 "name": "BaseBdev3", 00:17:54.603 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:17:54.603 "is_configured": true, 00:17:54.603 "data_offset": 2048, 00:17:54.603 "data_size": 63488 00:17:54.603 }, 00:17:54.603 { 00:17:54.603 "name": "BaseBdev4", 00:17:54.603 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:17:54.603 "is_configured": true, 00:17:54.603 "data_offset": 2048, 00:17:54.603 "data_size": 63488 00:17:54.603 } 00:17:54.603 ] 00:17:54.603 }' 00:17:54.603 18:16:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.603 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.174 "name": "raid_bdev1", 00:17:55.174 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:17:55.174 "strip_size_kb": 0, 00:17:55.174 "state": "online", 00:17:55.174 "raid_level": "raid1", 00:17:55.174 "superblock": true, 00:17:55.174 "num_base_bdevs": 4, 00:17:55.174 "num_base_bdevs_discovered": 3, 00:17:55.174 "num_base_bdevs_operational": 3, 00:17:55.174 "base_bdevs_list": [ 00:17:55.174 { 00:17:55.174 "name": null, 00:17:55.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.174 "is_configured": false, 00:17:55.174 "data_offset": 0, 00:17:55.174 
"data_size": 63488 00:17:55.174 }, 00:17:55.174 { 00:17:55.174 "name": "BaseBdev2", 00:17:55.174 "uuid": "11c40c38-7288-5e2a-bf47-41319242998f", 00:17:55.174 "is_configured": true, 00:17:55.174 "data_offset": 2048, 00:17:55.174 "data_size": 63488 00:17:55.174 }, 00:17:55.174 { 00:17:55.174 "name": "BaseBdev3", 00:17:55.174 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:17:55.174 "is_configured": true, 00:17:55.174 "data_offset": 2048, 00:17:55.174 "data_size": 63488 00:17:55.174 }, 00:17:55.174 { 00:17:55.174 "name": "BaseBdev4", 00:17:55.174 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:17:55.174 "is_configured": true, 00:17:55.174 "data_offset": 2048, 00:17:55.174 "data_size": 63488 00:17:55.174 } 00:17:55.174 ] 00:17:55.174 }' 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:55.174 [2024-12-06 18:16:20.631689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.174 18:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:55.435 [2024-12-06 18:16:20.691729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:17:55.435 [2024-12-06 18:16:20.694871] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:55.435 [2024-12-06 18:16:20.808608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:55.435 [2024-12-06 18:16:20.810248] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:55.694 120.33 IOPS, 361.00 MiB/s [2024-12-06T18:16:21.214Z] [2024-12-06 18:16:21.033338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:55.694 [2024-12-06 18:16:21.033954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:55.953 [2024-12-06 18:16:21.373355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:55.953 [2024-12-06 18:16:21.374353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:56.212 [2024-12-06 18:16:21.587227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:56.212 [2024-12-06 18:16:21.588418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:56.212 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.212 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.212 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.212 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.212 18:16:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.212 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.212 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.212 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.212 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.212 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.471 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.471 "name": "raid_bdev1", 00:17:56.471 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:17:56.471 "strip_size_kb": 0, 00:17:56.471 "state": "online", 00:17:56.471 "raid_level": "raid1", 00:17:56.471 "superblock": true, 00:17:56.471 "num_base_bdevs": 4, 00:17:56.471 "num_base_bdevs_discovered": 4, 00:17:56.471 "num_base_bdevs_operational": 4, 00:17:56.471 "process": { 00:17:56.471 "type": "rebuild", 00:17:56.471 "target": "spare", 00:17:56.471 "progress": { 00:17:56.471 "blocks": 10240, 00:17:56.471 "percent": 16 00:17:56.471 } 00:17:56.471 }, 00:17:56.471 "base_bdevs_list": [ 00:17:56.471 { 00:17:56.471 "name": "spare", 00:17:56.471 "uuid": "c32fefe9-e4e1-58b1-ac97-3438a3cde707", 00:17:56.471 "is_configured": true, 00:17:56.471 "data_offset": 2048, 00:17:56.471 "data_size": 63488 00:17:56.471 }, 00:17:56.471 { 00:17:56.471 "name": "BaseBdev2", 00:17:56.471 "uuid": "11c40c38-7288-5e2a-bf47-41319242998f", 00:17:56.471 "is_configured": true, 00:17:56.471 "data_offset": 2048, 00:17:56.471 "data_size": 63488 00:17:56.471 }, 00:17:56.471 { 00:17:56.471 "name": "BaseBdev3", 00:17:56.471 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:17:56.471 "is_configured": true, 00:17:56.471 "data_offset": 2048, 00:17:56.471 
"data_size": 63488 00:17:56.471 }, 00:17:56.471 { 00:17:56.471 "name": "BaseBdev4", 00:17:56.471 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:17:56.471 "is_configured": true, 00:17:56.471 "data_offset": 2048, 00:17:56.471 "data_size": 63488 00:17:56.471 } 00:17:56.471 ] 00:17:56.471 }' 00:17:56.471 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.471 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.471 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.471 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.471 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:56.471 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:56.471 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:56.471 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:56.471 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:56.471 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:56.471 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:56.471 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.471 18:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.471 [2024-12-06 18:16:21.860028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:56.471 [2024-12-06 18:16:21.919160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 
offset_end: 18432 00:17:56.731 110.00 IOPS, 330.00 MiB/s [2024-12-06T18:16:22.251Z] [2024-12-06 18:16:22.122810] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:56.731 [2024-12-06 18:16:22.123078] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.731 "name": "raid_bdev1", 00:17:56.731 "uuid": 
"1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:17:56.731 "strip_size_kb": 0, 00:17:56.731 "state": "online", 00:17:56.731 "raid_level": "raid1", 00:17:56.731 "superblock": true, 00:17:56.731 "num_base_bdevs": 4, 00:17:56.731 "num_base_bdevs_discovered": 3, 00:17:56.731 "num_base_bdevs_operational": 3, 00:17:56.731 "process": { 00:17:56.731 "type": "rebuild", 00:17:56.731 "target": "spare", 00:17:56.731 "progress": { 00:17:56.731 "blocks": 14336, 00:17:56.731 "percent": 22 00:17:56.731 } 00:17:56.731 }, 00:17:56.731 "base_bdevs_list": [ 00:17:56.731 { 00:17:56.731 "name": "spare", 00:17:56.731 "uuid": "c32fefe9-e4e1-58b1-ac97-3438a3cde707", 00:17:56.731 "is_configured": true, 00:17:56.731 "data_offset": 2048, 00:17:56.731 "data_size": 63488 00:17:56.731 }, 00:17:56.731 { 00:17:56.731 "name": null, 00:17:56.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.731 "is_configured": false, 00:17:56.731 "data_offset": 0, 00:17:56.731 "data_size": 63488 00:17:56.731 }, 00:17:56.731 { 00:17:56.731 "name": "BaseBdev3", 00:17:56.731 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:17:56.731 "is_configured": true, 00:17:56.731 "data_offset": 2048, 00:17:56.731 "data_size": 63488 00:17:56.731 }, 00:17:56.731 { 00:17:56.731 "name": "BaseBdev4", 00:17:56.731 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:17:56.731 "is_configured": true, 00:17:56.731 "data_offset": 2048, 00:17:56.731 "data_size": 63488 00:17:56.731 } 00:17:56.731 ] 00:17:56.731 }' 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.731 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.990 [2024-12-06 18:16:22.271864] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:56.990 
[2024-12-06 18:16:22.280524] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=538 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.990 "name": "raid_bdev1", 00:17:56.990 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:17:56.990 "strip_size_kb": 0, 00:17:56.990 "state": "online", 00:17:56.990 "raid_level": "raid1", 00:17:56.990 "superblock": true, 00:17:56.990 "num_base_bdevs": 4, 
00:17:56.990 "num_base_bdevs_discovered": 3, 00:17:56.990 "num_base_bdevs_operational": 3, 00:17:56.990 "process": { 00:17:56.990 "type": "rebuild", 00:17:56.990 "target": "spare", 00:17:56.990 "progress": { 00:17:56.990 "blocks": 16384, 00:17:56.990 "percent": 25 00:17:56.990 } 00:17:56.990 }, 00:17:56.990 "base_bdevs_list": [ 00:17:56.990 { 00:17:56.990 "name": "spare", 00:17:56.990 "uuid": "c32fefe9-e4e1-58b1-ac97-3438a3cde707", 00:17:56.990 "is_configured": true, 00:17:56.990 "data_offset": 2048, 00:17:56.990 "data_size": 63488 00:17:56.990 }, 00:17:56.990 { 00:17:56.990 "name": null, 00:17:56.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.990 "is_configured": false, 00:17:56.990 "data_offset": 0, 00:17:56.990 "data_size": 63488 00:17:56.990 }, 00:17:56.990 { 00:17:56.990 "name": "BaseBdev3", 00:17:56.990 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:17:56.990 "is_configured": true, 00:17:56.990 "data_offset": 2048, 00:17:56.990 "data_size": 63488 00:17:56.990 }, 00:17:56.990 { 00:17:56.990 "name": "BaseBdev4", 00:17:56.990 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:17:56.990 "is_configured": true, 00:17:56.990 "data_offset": 2048, 00:17:56.990 "data_size": 63488 00:17:56.990 } 00:17:56.990 ] 00:17:56.990 }' 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.990 18:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:57.558 [2024-12-06 18:16:22.805608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:57.558 
[2024-12-06 18:16:22.806314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:57.817 97.00 IOPS, 291.00 MiB/s [2024-12-06T18:16:23.337Z] [2024-12-06 18:16:23.139816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:57.817 [2024-12-06 18:16:23.259879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:57.817 [2024-12-06 18:16:23.260502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:58.076 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:58.076 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.076 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.076 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.076 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.076 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.076 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.076 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.076 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.076 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.076 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.076 18:16:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.077 "name": "raid_bdev1", 00:17:58.077 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:17:58.077 "strip_size_kb": 0, 00:17:58.077 "state": "online", 00:17:58.077 "raid_level": "raid1", 00:17:58.077 "superblock": true, 00:17:58.077 "num_base_bdevs": 4, 00:17:58.077 "num_base_bdevs_discovered": 3, 00:17:58.077 "num_base_bdevs_operational": 3, 00:17:58.077 "process": { 00:17:58.077 "type": "rebuild", 00:17:58.077 "target": "spare", 00:17:58.077 "progress": { 00:17:58.077 "blocks": 28672, 00:17:58.077 "percent": 45 00:17:58.077 } 00:17:58.077 }, 00:17:58.077 "base_bdevs_list": [ 00:17:58.077 { 00:17:58.077 "name": "spare", 00:17:58.077 "uuid": "c32fefe9-e4e1-58b1-ac97-3438a3cde707", 00:17:58.077 "is_configured": true, 00:17:58.077 "data_offset": 2048, 00:17:58.077 "data_size": 63488 00:17:58.077 }, 00:17:58.077 { 00:17:58.077 "name": null, 00:17:58.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.077 "is_configured": false, 00:17:58.077 "data_offset": 0, 00:17:58.077 "data_size": 63488 00:17:58.077 }, 00:17:58.077 { 00:17:58.077 "name": "BaseBdev3", 00:17:58.077 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:17:58.077 "is_configured": true, 00:17:58.077 "data_offset": 2048, 00:17:58.077 "data_size": 63488 00:17:58.077 }, 00:17:58.077 { 00:17:58.077 "name": "BaseBdev4", 00:17:58.077 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:17:58.077 "is_configured": true, 00:17:58.077 "data_offset": 2048, 00:17:58.077 "data_size": 63488 00:17:58.077 } 00:17:58.077 ] 00:17:58.077 }' 00:17:58.077 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.077 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.077 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.424 18:16:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.424 18:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:58.424 [2024-12-06 18:16:23.714594] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:58.697 [2024-12-06 18:16:23.935941] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:58.697 88.67 IOPS, 266.00 MiB/s [2024-12-06T18:16:24.217Z] [2024-12-06 18:16:24.046054] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:59.266 [2024-12-06 18:16:24.494693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.266 "name": "raid_bdev1", 00:17:59.266 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:17:59.266 "strip_size_kb": 0, 00:17:59.266 "state": "online", 00:17:59.266 "raid_level": "raid1", 00:17:59.266 "superblock": true, 00:17:59.266 "num_base_bdevs": 4, 00:17:59.266 "num_base_bdevs_discovered": 3, 00:17:59.266 "num_base_bdevs_operational": 3, 00:17:59.266 "process": { 00:17:59.266 "type": "rebuild", 00:17:59.266 "target": "spare", 00:17:59.266 "progress": { 00:17:59.266 "blocks": 49152, 00:17:59.266 "percent": 77 00:17:59.266 } 00:17:59.266 }, 00:17:59.266 "base_bdevs_list": [ 00:17:59.266 { 00:17:59.266 "name": "spare", 00:17:59.266 "uuid": "c32fefe9-e4e1-58b1-ac97-3438a3cde707", 00:17:59.266 "is_configured": true, 00:17:59.266 "data_offset": 2048, 00:17:59.266 "data_size": 63488 00:17:59.266 }, 00:17:59.266 { 00:17:59.266 "name": null, 00:17:59.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.266 "is_configured": false, 00:17:59.266 "data_offset": 0, 00:17:59.266 "data_size": 63488 00:17:59.266 }, 00:17:59.266 { 00:17:59.266 "name": "BaseBdev3", 00:17:59.266 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:17:59.266 "is_configured": true, 00:17:59.266 "data_offset": 2048, 00:17:59.266 "data_size": 63488 00:17:59.266 }, 00:17:59.266 { 00:17:59.266 "name": "BaseBdev4", 00:17:59.266 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:17:59.266 "is_configured": true, 00:17:59.266 "data_offset": 2048, 00:17:59.266 "data_size": 63488 00:17:59.266 } 00:17:59.266 ] 00:17:59.266 }' 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:17:59.266 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.525 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.525 18:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:59.525 [2024-12-06 18:16:24.838982] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:59.784 81.43 IOPS, 244.29 MiB/s [2024-12-06T18:16:25.304Z] [2024-12-06 18:16:25.060876] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:17:59.784 [2024-12-06 18:16:25.183582] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:18:00.041 [2024-12-06 18:16:25.527580] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:00.297 [2024-12-06 18:16:25.635794] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:00.297 [2024-12-06 18:16:25.640537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.552 18:16:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.552 "name": "raid_bdev1", 00:18:00.552 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:00.552 "strip_size_kb": 0, 00:18:00.552 "state": "online", 00:18:00.552 "raid_level": "raid1", 00:18:00.552 "superblock": true, 00:18:00.552 "num_base_bdevs": 4, 00:18:00.552 "num_base_bdevs_discovered": 3, 00:18:00.552 "num_base_bdevs_operational": 3, 00:18:00.552 "base_bdevs_list": [ 00:18:00.552 { 00:18:00.552 "name": "spare", 00:18:00.552 "uuid": "c32fefe9-e4e1-58b1-ac97-3438a3cde707", 00:18:00.552 "is_configured": true, 00:18:00.552 "data_offset": 2048, 00:18:00.552 "data_size": 63488 00:18:00.552 }, 00:18:00.552 { 00:18:00.552 "name": null, 00:18:00.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.552 "is_configured": false, 00:18:00.552 "data_offset": 0, 00:18:00.552 "data_size": 63488 00:18:00.552 }, 00:18:00.552 { 00:18:00.552 "name": "BaseBdev3", 00:18:00.552 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:00.552 "is_configured": true, 00:18:00.552 "data_offset": 2048, 00:18:00.552 "data_size": 63488 00:18:00.552 }, 00:18:00.552 { 00:18:00.552 "name": "BaseBdev4", 00:18:00.552 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:00.552 "is_configured": true, 00:18:00.552 "data_offset": 2048, 00:18:00.552 "data_size": 63488 00:18:00.552 } 00:18:00.552 ] 00:18:00.552 }' 00:18:00.552 18:16:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.552 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:00.552 74.50 IOPS, 223.50 MiB/s [2024-12-06T18:16:26.072Z] 18:16:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.552 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.552 "name": "raid_bdev1", 00:18:00.552 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:00.552 "strip_size_kb": 0, 00:18:00.552 "state": "online", 00:18:00.552 
"raid_level": "raid1", 00:18:00.552 "superblock": true, 00:18:00.552 "num_base_bdevs": 4, 00:18:00.553 "num_base_bdevs_discovered": 3, 00:18:00.553 "num_base_bdevs_operational": 3, 00:18:00.553 "base_bdevs_list": [ 00:18:00.553 { 00:18:00.553 "name": "spare", 00:18:00.553 "uuid": "c32fefe9-e4e1-58b1-ac97-3438a3cde707", 00:18:00.553 "is_configured": true, 00:18:00.553 "data_offset": 2048, 00:18:00.553 "data_size": 63488 00:18:00.553 }, 00:18:00.553 { 00:18:00.553 "name": null, 00:18:00.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.553 "is_configured": false, 00:18:00.553 "data_offset": 0, 00:18:00.553 "data_size": 63488 00:18:00.553 }, 00:18:00.553 { 00:18:00.553 "name": "BaseBdev3", 00:18:00.553 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:00.553 "is_configured": true, 00:18:00.553 "data_offset": 2048, 00:18:00.553 "data_size": 63488 00:18:00.553 }, 00:18:00.553 { 00:18:00.553 "name": "BaseBdev4", 00:18:00.553 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:00.553 "is_configured": true, 00:18:00.553 "data_offset": 2048, 00:18:00.553 "data_size": 63488 00:18:00.553 } 00:18:00.553 ] 00:18:00.553 }' 00:18:00.553 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.810 18:16:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.810 "name": "raid_bdev1", 00:18:00.810 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:00.810 "strip_size_kb": 0, 00:18:00.810 "state": "online", 00:18:00.810 "raid_level": "raid1", 00:18:00.810 "superblock": true, 00:18:00.810 "num_base_bdevs": 4, 00:18:00.810 "num_base_bdevs_discovered": 3, 00:18:00.810 "num_base_bdevs_operational": 3, 00:18:00.810 "base_bdevs_list": [ 00:18:00.810 { 00:18:00.810 "name": "spare", 00:18:00.810 "uuid": "c32fefe9-e4e1-58b1-ac97-3438a3cde707", 00:18:00.810 "is_configured": true, 00:18:00.810 "data_offset": 2048, 00:18:00.810 "data_size": 
63488 00:18:00.810 }, 00:18:00.810 { 00:18:00.810 "name": null, 00:18:00.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.810 "is_configured": false, 00:18:00.810 "data_offset": 0, 00:18:00.810 "data_size": 63488 00:18:00.810 }, 00:18:00.810 { 00:18:00.810 "name": "BaseBdev3", 00:18:00.810 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:00.810 "is_configured": true, 00:18:00.810 "data_offset": 2048, 00:18:00.810 "data_size": 63488 00:18:00.810 }, 00:18:00.810 { 00:18:00.810 "name": "BaseBdev4", 00:18:00.810 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:00.810 "is_configured": true, 00:18:00.810 "data_offset": 2048, 00:18:00.810 "data_size": 63488 00:18:00.810 } 00:18:00.810 ] 00:18:00.810 }' 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.810 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.373 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:01.373 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.373 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.373 [2024-12-06 18:16:26.693466] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.373 [2024-12-06 18:16:26.693504] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.373 00:18:01.373 Latency(us) 00:18:01.373 [2024-12-06T18:16:26.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.373 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:01.373 raid_bdev1 : 8.77 70.46 211.38 0.00 0.00 18334.30 294.17 113436.86 00:18:01.373 [2024-12-06T18:16:26.893Z] 
=================================================================================================================== 00:18:01.373 [2024-12-06T18:16:26.893Z] Total : 70.46 211.38 0.00 0.00 18334.30 294.17 113436.86 00:18:01.373 [2024-12-06 18:16:26.769739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.373 [2024-12-06 18:16:26.769854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.373 { 00:18:01.373 "results": [ 00:18:01.373 { 00:18:01.373 "job": "raid_bdev1", 00:18:01.373 "core_mask": "0x1", 00:18:01.373 "workload": "randrw", 00:18:01.373 "percentage": 50, 00:18:01.373 "status": "finished", 00:18:01.373 "queue_depth": 2, 00:18:01.373 "io_size": 3145728, 00:18:01.373 "runtime": 8.77112, 00:18:01.374 "iops": 70.4585047291566, 00:18:01.374 "mibps": 211.3755141874698, 00:18:01.374 "io_failed": 0, 00:18:01.374 "io_timeout": 0, 00:18:01.374 "avg_latency_us": 18334.30274786702, 00:18:01.374 "min_latency_us": 294.16727272727275, 00:18:01.374 "max_latency_us": 113436.85818181818 00:18:01.374 } 00:18:01.374 ], 00:18:01.374 "core_count": 1 00:18:01.374 } 00:18:01.374 [2024-12-06 18:16:26.769997] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.374 [2024-12-06 18:16:26.770014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:01.374 18:16:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:01.374 18:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:01.632 /dev/nbd0 00:18:01.632 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 
00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:01.890 1+0 records in 00:18:01.890 1+0 records out 00:18:01.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355765 s, 11.5 MB/s 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' 
-z '' ']' 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:01.890 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:18:02.149 /dev/nbd1 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.149 1+0 records in 00:18:02.149 1+0 records out 00:18:02.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340612 s, 12.0 MB/s 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:02.149 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:02.408 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:02.408 
18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:02.408 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:02.408 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:02.408 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:02.408 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:02.408 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:02.667 18:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:18:02.926 /dev/nbd1 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.926 1+0 records in 00:18:02.926 1+0 records out 00:18:02.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328873 s, 12.5 MB/s 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:02.926 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:02.926 18:16:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.184 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:03.477 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.478 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.478 [2024-12-06 18:16:28.948600] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:03.478 [2024-12-06 18:16:28.948681] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.478 [2024-12-06 18:16:28.948711] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:03.478 [2024-12-06 18:16:28.948725] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.478 [2024-12-06 18:16:28.951831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.478 [2024-12-06 18:16:28.951889] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:03.478 [2024-12-06 18:16:28.952000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:03.478 [2024-12-06 18:16:28.952070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.478 [2024-12-06 18:16:28.952264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:03.478 [2024-12-06 18:16:28.952390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:03.478 spare 00:18:03.478 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.478 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:03.478 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.478 18:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.751 [2024-12-06 18:16:29.052519] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:03.751 [2024-12-06 18:16:29.052552] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:03.751 [2024-12-06 18:16:29.053008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:18:03.751 [2024-12-06 18:16:29.053248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:03.751 [2024-12-06 18:16:29.053271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:03.751 [2024-12-06 18:16:29.053506] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.751 "name": "raid_bdev1", 00:18:03.751 "uuid": 
"1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:03.751 "strip_size_kb": 0, 00:18:03.751 "state": "online", 00:18:03.751 "raid_level": "raid1", 00:18:03.751 "superblock": true, 00:18:03.751 "num_base_bdevs": 4, 00:18:03.751 "num_base_bdevs_discovered": 3, 00:18:03.751 "num_base_bdevs_operational": 3, 00:18:03.751 "base_bdevs_list": [ 00:18:03.751 { 00:18:03.751 "name": "spare", 00:18:03.751 "uuid": "c32fefe9-e4e1-58b1-ac97-3438a3cde707", 00:18:03.751 "is_configured": true, 00:18:03.751 "data_offset": 2048, 00:18:03.751 "data_size": 63488 00:18:03.751 }, 00:18:03.751 { 00:18:03.751 "name": null, 00:18:03.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.751 "is_configured": false, 00:18:03.751 "data_offset": 2048, 00:18:03.751 "data_size": 63488 00:18:03.751 }, 00:18:03.751 { 00:18:03.751 "name": "BaseBdev3", 00:18:03.751 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:03.751 "is_configured": true, 00:18:03.751 "data_offset": 2048, 00:18:03.751 "data_size": 63488 00:18:03.751 }, 00:18:03.751 { 00:18:03.751 "name": "BaseBdev4", 00:18:03.751 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:03.751 "is_configured": true, 00:18:03.751 "data_offset": 2048, 00:18:03.751 "data_size": 63488 00:18:03.751 } 00:18:03.751 ] 00:18:03.751 }' 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.751 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.320 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:04.320 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.320 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:04.320 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:04.320 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.320 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.320 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.320 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.320 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.320 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.320 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.320 "name": "raid_bdev1", 00:18:04.320 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:04.320 "strip_size_kb": 0, 00:18:04.320 "state": "online", 00:18:04.320 "raid_level": "raid1", 00:18:04.320 "superblock": true, 00:18:04.320 "num_base_bdevs": 4, 00:18:04.320 "num_base_bdevs_discovered": 3, 00:18:04.320 "num_base_bdevs_operational": 3, 00:18:04.320 "base_bdevs_list": [ 00:18:04.320 { 00:18:04.320 "name": "spare", 00:18:04.320 "uuid": "c32fefe9-e4e1-58b1-ac97-3438a3cde707", 00:18:04.320 "is_configured": true, 00:18:04.320 "data_offset": 2048, 00:18:04.320 "data_size": 63488 00:18:04.320 }, 00:18:04.320 { 00:18:04.320 "name": null, 00:18:04.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.320 "is_configured": false, 00:18:04.320 "data_offset": 2048, 00:18:04.320 "data_size": 63488 00:18:04.320 }, 00:18:04.320 { 00:18:04.320 "name": "BaseBdev3", 00:18:04.320 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:04.320 "is_configured": true, 00:18:04.320 "data_offset": 2048, 00:18:04.320 "data_size": 63488 00:18:04.320 }, 00:18:04.320 { 00:18:04.320 "name": "BaseBdev4", 00:18:04.320 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:04.320 "is_configured": true, 00:18:04.320 "data_offset": 2048, 00:18:04.320 "data_size": 63488 
00:18:04.321 } 00:18:04.321 ] 00:18:04.321 }' 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.321 [2024-12-06 18:16:29.753887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.321 18:16:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.321 "name": "raid_bdev1", 00:18:04.321 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:04.321 "strip_size_kb": 0, 00:18:04.321 "state": "online", 00:18:04.321 "raid_level": "raid1", 00:18:04.321 "superblock": true, 00:18:04.321 "num_base_bdevs": 4, 00:18:04.321 "num_base_bdevs_discovered": 2, 00:18:04.321 "num_base_bdevs_operational": 2, 00:18:04.321 "base_bdevs_list": [ 00:18:04.321 { 00:18:04.321 "name": null, 00:18:04.321 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:04.321 "is_configured": false, 00:18:04.321 "data_offset": 0, 00:18:04.321 "data_size": 63488 00:18:04.321 }, 00:18:04.321 { 00:18:04.321 "name": null, 00:18:04.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.321 "is_configured": false, 00:18:04.321 "data_offset": 2048, 00:18:04.321 "data_size": 63488 00:18:04.321 }, 00:18:04.321 { 00:18:04.321 "name": "BaseBdev3", 00:18:04.321 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:04.321 "is_configured": true, 00:18:04.321 "data_offset": 2048, 00:18:04.321 "data_size": 63488 00:18:04.321 }, 00:18:04.321 { 00:18:04.321 "name": "BaseBdev4", 00:18:04.321 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:04.321 "is_configured": true, 00:18:04.321 "data_offset": 2048, 00:18:04.321 "data_size": 63488 00:18:04.321 } 00:18:04.321 ] 00:18:04.321 }' 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.321 18:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.890 18:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:04.890 18:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.890 18:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.890 [2024-12-06 18:16:30.314255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.890 [2024-12-06 18:16:30.314546] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:04.890 [2024-12-06 18:16:30.314567] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:04.890 [2024-12-06 18:16:30.314622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.890 [2024-12-06 18:16:30.328540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:18:04.890 18:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.890 18:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:04.890 [2024-12-06 18:16:30.331054] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:05.827 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.827 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.827 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.827 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.827 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.827 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.827 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.827 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.827 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.086 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.086 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.086 "name": "raid_bdev1", 00:18:06.086 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:06.086 "strip_size_kb": 0, 00:18:06.086 "state": "online", 
00:18:06.086 "raid_level": "raid1", 00:18:06.086 "superblock": true, 00:18:06.086 "num_base_bdevs": 4, 00:18:06.086 "num_base_bdevs_discovered": 3, 00:18:06.086 "num_base_bdevs_operational": 3, 00:18:06.086 "process": { 00:18:06.086 "type": "rebuild", 00:18:06.086 "target": "spare", 00:18:06.086 "progress": { 00:18:06.086 "blocks": 20480, 00:18:06.086 "percent": 32 00:18:06.086 } 00:18:06.086 }, 00:18:06.086 "base_bdevs_list": [ 00:18:06.086 { 00:18:06.086 "name": "spare", 00:18:06.086 "uuid": "c32fefe9-e4e1-58b1-ac97-3438a3cde707", 00:18:06.086 "is_configured": true, 00:18:06.086 "data_offset": 2048, 00:18:06.086 "data_size": 63488 00:18:06.086 }, 00:18:06.086 { 00:18:06.086 "name": null, 00:18:06.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.086 "is_configured": false, 00:18:06.086 "data_offset": 2048, 00:18:06.086 "data_size": 63488 00:18:06.086 }, 00:18:06.086 { 00:18:06.086 "name": "BaseBdev3", 00:18:06.086 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:06.086 "is_configured": true, 00:18:06.086 "data_offset": 2048, 00:18:06.086 "data_size": 63488 00:18:06.086 }, 00:18:06.086 { 00:18:06.086 "name": "BaseBdev4", 00:18:06.086 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:06.086 "is_configured": true, 00:18:06.086 "data_offset": 2048, 00:18:06.086 "data_size": 63488 00:18:06.086 } 00:18:06.086 ] 00:18:06.086 }' 00:18:06.086 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.086 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.086 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.086 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.086 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:06.086 18:16:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.086 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.086 [2024-12-06 18:16:31.520515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.086 [2024-12-06 18:16:31.540403] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:06.086 [2024-12-06 18:16:31.540508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.086 [2024-12-06 18:16:31.540539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.087 [2024-12-06 18:16:31.540552] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.087 18:16:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.087 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.346 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.346 "name": "raid_bdev1", 00:18:06.346 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:06.346 "strip_size_kb": 0, 00:18:06.346 "state": "online", 00:18:06.346 "raid_level": "raid1", 00:18:06.346 "superblock": true, 00:18:06.346 "num_base_bdevs": 4, 00:18:06.346 "num_base_bdevs_discovered": 2, 00:18:06.346 "num_base_bdevs_operational": 2, 00:18:06.346 "base_bdevs_list": [ 00:18:06.346 { 00:18:06.346 "name": null, 00:18:06.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.346 "is_configured": false, 00:18:06.346 "data_offset": 0, 00:18:06.346 "data_size": 63488 00:18:06.346 }, 00:18:06.346 { 00:18:06.346 "name": null, 00:18:06.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.346 "is_configured": false, 00:18:06.346 "data_offset": 2048, 00:18:06.346 "data_size": 63488 00:18:06.346 }, 00:18:06.346 { 00:18:06.346 "name": "BaseBdev3", 00:18:06.346 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:06.346 "is_configured": true, 00:18:06.346 "data_offset": 2048, 00:18:06.346 "data_size": 63488 00:18:06.346 }, 00:18:06.346 { 00:18:06.346 "name": "BaseBdev4", 00:18:06.346 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:06.346 "is_configured": true, 00:18:06.346 "data_offset": 2048, 00:18:06.347 
"data_size": 63488 00:18:06.347 } 00:18:06.347 ] 00:18:06.347 }' 00:18:06.347 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.347 18:16:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.606 18:16:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.606 18:16:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.606 18:16:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:06.606 [2024-12-06 18:16:32.095602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.606 [2024-12-06 18:16:32.095709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.606 [2024-12-06 18:16:32.095751] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:06.606 [2024-12-06 18:16:32.095783] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.606 [2024-12-06 18:16:32.096403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.606 [2024-12-06 18:16:32.096448] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.606 [2024-12-06 18:16:32.096568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:06.606 [2024-12-06 18:16:32.096587] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:06.606 [2024-12-06 18:16:32.096609] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:06.606 [2024-12-06 18:16:32.096637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.606 [2024-12-06 18:16:32.111170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:18:06.606 spare 00:18:06.606 18:16:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.606 18:16:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:06.606 [2024-12-06 18:16:32.113693] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.988 "name": "raid_bdev1", 00:18:07.988 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:07.988 "strip_size_kb": 0, 00:18:07.988 
"state": "online", 00:18:07.988 "raid_level": "raid1", 00:18:07.988 "superblock": true, 00:18:07.988 "num_base_bdevs": 4, 00:18:07.988 "num_base_bdevs_discovered": 3, 00:18:07.988 "num_base_bdevs_operational": 3, 00:18:07.988 "process": { 00:18:07.988 "type": "rebuild", 00:18:07.988 "target": "spare", 00:18:07.988 "progress": { 00:18:07.988 "blocks": 20480, 00:18:07.988 "percent": 32 00:18:07.988 } 00:18:07.988 }, 00:18:07.988 "base_bdevs_list": [ 00:18:07.988 { 00:18:07.988 "name": "spare", 00:18:07.988 "uuid": "c32fefe9-e4e1-58b1-ac97-3438a3cde707", 00:18:07.988 "is_configured": true, 00:18:07.988 "data_offset": 2048, 00:18:07.988 "data_size": 63488 00:18:07.988 }, 00:18:07.988 { 00:18:07.988 "name": null, 00:18:07.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.988 "is_configured": false, 00:18:07.988 "data_offset": 2048, 00:18:07.988 "data_size": 63488 00:18:07.988 }, 00:18:07.988 { 00:18:07.988 "name": "BaseBdev3", 00:18:07.988 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:07.988 "is_configured": true, 00:18:07.988 "data_offset": 2048, 00:18:07.988 "data_size": 63488 00:18:07.988 }, 00:18:07.988 { 00:18:07.988 "name": "BaseBdev4", 00:18:07.988 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:07.988 "is_configured": true, 00:18:07.988 "data_offset": 2048, 00:18:07.988 "data_size": 63488 00:18:07.988 } 00:18:07.988 ] 00:18:07.988 }' 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:07.988 18:16:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.988 [2024-12-06 18:16:33.287841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.988 [2024-12-06 18:16:33.323307] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:07.988 [2024-12-06 18:16:33.323519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.988 [2024-12-06 18:16:33.323549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.988 [2024-12-06 18:16:33.323564] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.988 18:16:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.988 "name": "raid_bdev1", 00:18:07.988 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:07.988 "strip_size_kb": 0, 00:18:07.988 "state": "online", 00:18:07.988 "raid_level": "raid1", 00:18:07.988 "superblock": true, 00:18:07.988 "num_base_bdevs": 4, 00:18:07.988 "num_base_bdevs_discovered": 2, 00:18:07.988 "num_base_bdevs_operational": 2, 00:18:07.988 "base_bdevs_list": [ 00:18:07.988 { 00:18:07.988 "name": null, 00:18:07.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.988 "is_configured": false, 00:18:07.988 "data_offset": 0, 00:18:07.988 "data_size": 63488 00:18:07.988 }, 00:18:07.988 { 00:18:07.988 "name": null, 00:18:07.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.988 "is_configured": false, 00:18:07.988 "data_offset": 2048, 00:18:07.988 "data_size": 63488 00:18:07.988 }, 00:18:07.988 { 00:18:07.988 "name": "BaseBdev3", 00:18:07.988 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:07.988 "is_configured": true, 00:18:07.988 "data_offset": 2048, 00:18:07.988 "data_size": 63488 00:18:07.988 }, 00:18:07.988 { 00:18:07.988 "name": "BaseBdev4", 00:18:07.988 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:07.988 "is_configured": true, 00:18:07.988 "data_offset": 2048, 00:18:07.988 
"data_size": 63488 00:18:07.988 } 00:18:07.988 ] 00:18:07.988 }' 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.988 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.557 "name": "raid_bdev1", 00:18:08.557 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:08.557 "strip_size_kb": 0, 00:18:08.557 "state": "online", 00:18:08.557 "raid_level": "raid1", 00:18:08.557 "superblock": true, 00:18:08.557 "num_base_bdevs": 4, 00:18:08.557 "num_base_bdevs_discovered": 2, 00:18:08.557 "num_base_bdevs_operational": 2, 00:18:08.557 "base_bdevs_list": [ 00:18:08.557 { 00:18:08.557 "name": null, 00:18:08.557 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:08.557 "is_configured": false, 00:18:08.557 "data_offset": 0, 00:18:08.557 "data_size": 63488 00:18:08.557 }, 00:18:08.557 { 00:18:08.557 "name": null, 00:18:08.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.557 "is_configured": false, 00:18:08.557 "data_offset": 2048, 00:18:08.557 "data_size": 63488 00:18:08.557 }, 00:18:08.557 { 00:18:08.557 "name": "BaseBdev3", 00:18:08.557 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:08.557 "is_configured": true, 00:18:08.557 "data_offset": 2048, 00:18:08.557 "data_size": 63488 00:18:08.557 }, 00:18:08.557 { 00:18:08.557 "name": "BaseBdev4", 00:18:08.557 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:08.557 "is_configured": true, 00:18:08.557 "data_offset": 2048, 00:18:08.557 "data_size": 63488 00:18:08.557 } 00:18:08.557 ] 00:18:08.557 }' 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.557 18:16:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.557 18:16:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.557 18:16:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:08.557 18:16:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.557 18:16:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.557 [2024-12-06 18:16:34.007996] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:08.557 [2024-12-06 18:16:34.008065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.557 [2024-12-06 18:16:34.008094] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:18:08.557 [2024-12-06 18:16:34.008111] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.557 [2024-12-06 18:16:34.008672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.557 [2024-12-06 18:16:34.008714] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:08.557 [2024-12-06 18:16:34.008828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:08.557 [2024-12-06 18:16:34.008860] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:08.557 [2024-12-06 18:16:34.008872] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:08.557 [2024-12-06 18:16:34.008887] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:08.557 BaseBdev1 00:18:08.557 18:16:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.557 18:16:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:09.514 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:09.514 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.514 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:09.514 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.514 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.514 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.514 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.514 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.515 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.515 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.515 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.515 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.515 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.515 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.772 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.772 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.772 "name": "raid_bdev1", 00:18:09.772 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:09.772 "strip_size_kb": 0, 00:18:09.772 "state": "online", 00:18:09.773 "raid_level": "raid1", 00:18:09.773 "superblock": true, 00:18:09.773 "num_base_bdevs": 4, 00:18:09.773 "num_base_bdevs_discovered": 2, 00:18:09.773 "num_base_bdevs_operational": 2, 00:18:09.773 "base_bdevs_list": [ 00:18:09.773 { 00:18:09.773 "name": null, 00:18:09.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.773 "is_configured": false, 00:18:09.773 
"data_offset": 0, 00:18:09.773 "data_size": 63488 00:18:09.773 }, 00:18:09.773 { 00:18:09.773 "name": null, 00:18:09.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.773 "is_configured": false, 00:18:09.773 "data_offset": 2048, 00:18:09.773 "data_size": 63488 00:18:09.773 }, 00:18:09.773 { 00:18:09.773 "name": "BaseBdev3", 00:18:09.773 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:09.773 "is_configured": true, 00:18:09.773 "data_offset": 2048, 00:18:09.773 "data_size": 63488 00:18:09.773 }, 00:18:09.773 { 00:18:09.773 "name": "BaseBdev4", 00:18:09.773 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:09.773 "is_configured": true, 00:18:09.773 "data_offset": 2048, 00:18:09.773 "data_size": 63488 00:18:09.773 } 00:18:09.773 ] 00:18:09.773 }' 00:18:09.773 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.773 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:10.031 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.031 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.031 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.031 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.031 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.031 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.031 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.031 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:10.031 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:10.031 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.291 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.291 "name": "raid_bdev1", 00:18:10.291 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:10.291 "strip_size_kb": 0, 00:18:10.291 "state": "online", 00:18:10.291 "raid_level": "raid1", 00:18:10.291 "superblock": true, 00:18:10.291 "num_base_bdevs": 4, 00:18:10.291 "num_base_bdevs_discovered": 2, 00:18:10.291 "num_base_bdevs_operational": 2, 00:18:10.291 "base_bdevs_list": [ 00:18:10.291 { 00:18:10.291 "name": null, 00:18:10.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.291 "is_configured": false, 00:18:10.291 "data_offset": 0, 00:18:10.291 "data_size": 63488 00:18:10.291 }, 00:18:10.291 { 00:18:10.291 "name": null, 00:18:10.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.291 "is_configured": false, 00:18:10.291 "data_offset": 2048, 00:18:10.291 "data_size": 63488 00:18:10.291 }, 00:18:10.291 { 00:18:10.291 "name": "BaseBdev3", 00:18:10.291 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:10.291 "is_configured": true, 00:18:10.291 "data_offset": 2048, 00:18:10.291 "data_size": 63488 00:18:10.291 }, 00:18:10.291 { 00:18:10.291 "name": "BaseBdev4", 00:18:10.291 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:10.291 "is_configured": true, 00:18:10.291 "data_offset": 2048, 00:18:10.291 "data_size": 63488 00:18:10.291 } 00:18:10.291 ] 00:18:10.291 }' 00:18:10.291 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:10.292 [2024-12-06 18:16:35.680949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:10.292 [2024-12-06 18:16:35.681157] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:10.292 [2024-12-06 18:16:35.681177] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:10.292 request: 00:18:10.292 { 00:18:10.292 "base_bdev": "BaseBdev1", 00:18:10.292 "raid_bdev": "raid_bdev1", 00:18:10.292 "method": "bdev_raid_add_base_bdev", 00:18:10.292 "req_id": 1 00:18:10.292 } 00:18:10.292 Got JSON-RPC error response 00:18:10.292 response: 00:18:10.292 { 00:18:10.292 "code": -22, 
00:18:10.292 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:10.292 } 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.292 18:16:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:11.228 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:11.228 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.228 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.228 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.228 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.228 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:11.228 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.228 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.228 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.228 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.228 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.228 18:16:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.228 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.228 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:11.228 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.487 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.487 "name": "raid_bdev1", 00:18:11.487 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:11.487 "strip_size_kb": 0, 00:18:11.487 "state": "online", 00:18:11.487 "raid_level": "raid1", 00:18:11.487 "superblock": true, 00:18:11.487 "num_base_bdevs": 4, 00:18:11.487 "num_base_bdevs_discovered": 2, 00:18:11.487 "num_base_bdevs_operational": 2, 00:18:11.487 "base_bdevs_list": [ 00:18:11.487 { 00:18:11.487 "name": null, 00:18:11.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.487 "is_configured": false, 00:18:11.487 "data_offset": 0, 00:18:11.487 "data_size": 63488 00:18:11.487 }, 00:18:11.487 { 00:18:11.487 "name": null, 00:18:11.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.487 "is_configured": false, 00:18:11.487 "data_offset": 2048, 00:18:11.488 "data_size": 63488 00:18:11.488 }, 00:18:11.488 { 00:18:11.488 "name": "BaseBdev3", 00:18:11.488 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:11.488 "is_configured": true, 00:18:11.488 "data_offset": 2048, 00:18:11.488 "data_size": 63488 00:18:11.488 }, 00:18:11.488 { 00:18:11.488 "name": "BaseBdev4", 00:18:11.488 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:11.488 "is_configured": true, 00:18:11.488 "data_offset": 2048, 00:18:11.488 "data_size": 63488 00:18:11.488 } 00:18:11.488 ] 00:18:11.488 }' 00:18:11.488 18:16:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.488 18:16:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.055 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.056 "name": "raid_bdev1", 00:18:12.056 "uuid": "1ba8ceff-faf6-4fb8-b7bb-3514db0f2482", 00:18:12.056 "strip_size_kb": 0, 00:18:12.056 "state": "online", 00:18:12.056 "raid_level": "raid1", 00:18:12.056 "superblock": true, 00:18:12.056 "num_base_bdevs": 4, 00:18:12.056 "num_base_bdevs_discovered": 2, 00:18:12.056 "num_base_bdevs_operational": 2, 00:18:12.056 "base_bdevs_list": [ 00:18:12.056 { 00:18:12.056 "name": null, 00:18:12.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.056 "is_configured": false, 00:18:12.056 "data_offset": 0, 00:18:12.056 "data_size": 63488 00:18:12.056 }, 00:18:12.056 { 00:18:12.056 "name": null, 00:18:12.056 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:12.056 "is_configured": false, 00:18:12.056 "data_offset": 2048, 00:18:12.056 "data_size": 63488 00:18:12.056 }, 00:18:12.056 { 00:18:12.056 "name": "BaseBdev3", 00:18:12.056 "uuid": "0ab6fe4e-5837-560b-9578-f334f0479d12", 00:18:12.056 "is_configured": true, 00:18:12.056 "data_offset": 2048, 00:18:12.056 "data_size": 63488 00:18:12.056 }, 00:18:12.056 { 00:18:12.056 "name": "BaseBdev4", 00:18:12.056 "uuid": "81e28c54-84ec-59aa-b66d-aecbcad47ae6", 00:18:12.056 "is_configured": true, 00:18:12.056 "data_offset": 2048, 00:18:12.056 "data_size": 63488 00:18:12.056 } 00:18:12.056 ] 00:18:12.056 }' 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79535 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79535 ']' 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79535 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79535 00:18:12.056 killing process with pid 79535 00:18:12.056 Received shutdown signal, test time was about 19.490861 seconds 00:18:12.056 00:18:12.056 Latency(us) 00:18:12.056 [2024-12-06T18:16:37.576Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:18:12.056 [2024-12-06T18:16:37.576Z] =================================================================================================================== 00:18:12.056 [2024-12-06T18:16:37.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79535' 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79535 00:18:12.056 [2024-12-06 18:16:37.469150] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:12.056 18:16:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79535 00:18:12.056 [2024-12-06 18:16:37.469297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.056 [2024-12-06 18:16:37.469392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.056 [2024-12-06 18:16:37.469409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:12.623 [2024-12-06 18:16:37.852793] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:13.560 18:16:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:13.560 00:18:13.560 real 0m23.155s 00:18:13.560 user 0m31.569s 00:18:13.560 sys 0m2.359s 00:18:13.560 18:16:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.560 18:16:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.560 ************************************ 00:18:13.560 END TEST raid_rebuild_test_sb_io 00:18:13.560 
************************************ 00:18:13.560 18:16:39 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:13.560 18:16:39 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:18:13.560 18:16:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:13.560 18:16:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.560 18:16:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:13.560 ************************************ 00:18:13.560 START TEST raid5f_state_function_test 00:18:13.560 ************************************ 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:13.560 18:16:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:13.560 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80275 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80275' 00:18:13.561 Process raid pid: 80275 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80275 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80275 ']' 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.561 18:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.820 [2024-12-06 18:16:39.133416] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:18:13.820 [2024-12-06 18:16:39.133616] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.820 [2024-12-06 18:16:39.322211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.079 [2024-12-06 18:16:39.455191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.339 [2024-12-06 18:16:39.663386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.339 [2024-12-06 18:16:39.663446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.599 [2024-12-06 18:16:40.055917] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.599 [2024-12-06 18:16:40.055986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.599 [2024-12-06 18:16:40.056004] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.599 [2024-12-06 18:16:40.056020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.599 [2024-12-06 18:16:40.056037] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:14.599 [2024-12-06 18:16:40.056052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.599 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:14.858 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.858 "name": "Existed_Raid", 00:18:14.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.858 "strip_size_kb": 64, 00:18:14.858 "state": "configuring", 00:18:14.858 "raid_level": "raid5f", 00:18:14.859 "superblock": false, 00:18:14.859 "num_base_bdevs": 3, 00:18:14.859 "num_base_bdevs_discovered": 0, 00:18:14.859 "num_base_bdevs_operational": 3, 00:18:14.859 "base_bdevs_list": [ 00:18:14.859 { 00:18:14.859 "name": "BaseBdev1", 00:18:14.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.859 "is_configured": false, 00:18:14.859 "data_offset": 0, 00:18:14.859 "data_size": 0 00:18:14.859 }, 00:18:14.859 { 00:18:14.859 "name": "BaseBdev2", 00:18:14.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.859 "is_configured": false, 00:18:14.859 "data_offset": 0, 00:18:14.859 "data_size": 0 00:18:14.859 }, 00:18:14.859 { 00:18:14.859 "name": "BaseBdev3", 00:18:14.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.859 "is_configured": false, 00:18:14.859 "data_offset": 0, 00:18:14.859 "data_size": 0 00:18:14.859 } 00:18:14.859 ] 00:18:14.859 }' 00:18:14.859 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.859 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.118 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:15.118 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.118 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.118 [2024-12-06 18:16:40.599997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:15.118 [2024-12-06 18:16:40.600048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:18:15.118 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.118 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:15.118 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.118 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.118 [2024-12-06 18:16:40.607991] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:15.118 [2024-12-06 18:16:40.608046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:15.118 [2024-12-06 18:16:40.608061] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:15.118 [2024-12-06 18:16:40.608076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.118 [2024-12-06 18:16:40.608085] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:15.118 [2024-12-06 18:16:40.608099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:15.118 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.118 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:15.118 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.118 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.377 [2024-12-06 18:16:40.652432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.377 BaseBdev1 00:18:15.377 18:16:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.377 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:15.377 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:15.377 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:15.377 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:15.377 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:15.377 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:15.377 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:15.377 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.377 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.377 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.377 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:15.377 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.377 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.377 [ 00:18:15.377 { 00:18:15.377 "name": "BaseBdev1", 00:18:15.377 "aliases": [ 00:18:15.377 "f6d6a2d0-1338-47db-9477-2d2b96b6ef72" 00:18:15.377 ], 00:18:15.377 "product_name": "Malloc disk", 00:18:15.377 "block_size": 512, 00:18:15.378 "num_blocks": 65536, 00:18:15.378 "uuid": "f6d6a2d0-1338-47db-9477-2d2b96b6ef72", 00:18:15.378 "assigned_rate_limits": { 00:18:15.378 "rw_ios_per_sec": 0, 00:18:15.378 
"rw_mbytes_per_sec": 0, 00:18:15.378 "r_mbytes_per_sec": 0, 00:18:15.378 "w_mbytes_per_sec": 0 00:18:15.378 }, 00:18:15.378 "claimed": true, 00:18:15.378 "claim_type": "exclusive_write", 00:18:15.378 "zoned": false, 00:18:15.378 "supported_io_types": { 00:18:15.378 "read": true, 00:18:15.378 "write": true, 00:18:15.378 "unmap": true, 00:18:15.378 "flush": true, 00:18:15.378 "reset": true, 00:18:15.378 "nvme_admin": false, 00:18:15.378 "nvme_io": false, 00:18:15.378 "nvme_io_md": false, 00:18:15.378 "write_zeroes": true, 00:18:15.378 "zcopy": true, 00:18:15.378 "get_zone_info": false, 00:18:15.378 "zone_management": false, 00:18:15.378 "zone_append": false, 00:18:15.378 "compare": false, 00:18:15.378 "compare_and_write": false, 00:18:15.378 "abort": true, 00:18:15.378 "seek_hole": false, 00:18:15.378 "seek_data": false, 00:18:15.378 "copy": true, 00:18:15.378 "nvme_iov_md": false 00:18:15.378 }, 00:18:15.378 "memory_domains": [ 00:18:15.378 { 00:18:15.378 "dma_device_id": "system", 00:18:15.378 "dma_device_type": 1 00:18:15.378 }, 00:18:15.378 { 00:18:15.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.378 "dma_device_type": 2 00:18:15.378 } 00:18:15.378 ], 00:18:15.378 "driver_specific": {} 00:18:15.378 } 00:18:15.378 ] 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.378 18:16:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.378 "name": "Existed_Raid", 00:18:15.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.378 "strip_size_kb": 64, 00:18:15.378 "state": "configuring", 00:18:15.378 "raid_level": "raid5f", 00:18:15.378 "superblock": false, 00:18:15.378 "num_base_bdevs": 3, 00:18:15.378 "num_base_bdevs_discovered": 1, 00:18:15.378 "num_base_bdevs_operational": 3, 00:18:15.378 "base_bdevs_list": [ 00:18:15.378 { 00:18:15.378 "name": "BaseBdev1", 00:18:15.378 "uuid": "f6d6a2d0-1338-47db-9477-2d2b96b6ef72", 00:18:15.378 "is_configured": true, 00:18:15.378 "data_offset": 0, 00:18:15.378 "data_size": 65536 00:18:15.378 }, 00:18:15.378 { 00:18:15.378 "name": 
"BaseBdev2", 00:18:15.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.378 "is_configured": false, 00:18:15.378 "data_offset": 0, 00:18:15.378 "data_size": 0 00:18:15.378 }, 00:18:15.378 { 00:18:15.378 "name": "BaseBdev3", 00:18:15.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.378 "is_configured": false, 00:18:15.378 "data_offset": 0, 00:18:15.378 "data_size": 0 00:18:15.378 } 00:18:15.378 ] 00:18:15.378 }' 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.378 18:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.946 [2024-12-06 18:16:41.208655] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:15.946 [2024-12-06 18:16:41.208724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.946 [2024-12-06 18:16:41.216713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.946 [2024-12-06 18:16:41.219212] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:18:15.946 [2024-12-06 18:16:41.219273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.946 [2024-12-06 18:16:41.219291] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:15.946 [2024-12-06 18:16:41.219307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.946 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.946 "name": "Existed_Raid", 00:18:15.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.946 "strip_size_kb": 64, 00:18:15.946 "state": "configuring", 00:18:15.946 "raid_level": "raid5f", 00:18:15.946 "superblock": false, 00:18:15.946 "num_base_bdevs": 3, 00:18:15.946 "num_base_bdevs_discovered": 1, 00:18:15.946 "num_base_bdevs_operational": 3, 00:18:15.946 "base_bdevs_list": [ 00:18:15.946 { 00:18:15.946 "name": "BaseBdev1", 00:18:15.946 "uuid": "f6d6a2d0-1338-47db-9477-2d2b96b6ef72", 00:18:15.946 "is_configured": true, 00:18:15.946 "data_offset": 0, 00:18:15.946 "data_size": 65536 00:18:15.946 }, 00:18:15.946 { 00:18:15.946 "name": "BaseBdev2", 00:18:15.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.946 "is_configured": false, 00:18:15.946 "data_offset": 0, 00:18:15.947 "data_size": 0 00:18:15.947 }, 00:18:15.947 { 00:18:15.947 "name": "BaseBdev3", 00:18:15.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.947 "is_configured": false, 00:18:15.947 "data_offset": 0, 00:18:15.947 "data_size": 0 00:18:15.947 } 00:18:15.947 ] 00:18:15.947 }' 00:18:15.947 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.947 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.206 18:16:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:16.206 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.206 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.206 BaseBdev2 00:18:16.206 [2024-12-06 18:16:41.718937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.206 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.206 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:16.206 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:16.206 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:16.206 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:16.206 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:16.206 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:16.206 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:16.206 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.206 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:16.466 [ 00:18:16.466 { 00:18:16.466 "name": "BaseBdev2", 00:18:16.466 "aliases": [ 00:18:16.466 "f00a24e4-a989-472a-9099-44d66c0a9e48" 00:18:16.466 ], 00:18:16.466 "product_name": "Malloc disk", 00:18:16.466 "block_size": 512, 00:18:16.466 "num_blocks": 65536, 00:18:16.466 "uuid": "f00a24e4-a989-472a-9099-44d66c0a9e48", 00:18:16.466 "assigned_rate_limits": { 00:18:16.466 "rw_ios_per_sec": 0, 00:18:16.466 "rw_mbytes_per_sec": 0, 00:18:16.466 "r_mbytes_per_sec": 0, 00:18:16.466 "w_mbytes_per_sec": 0 00:18:16.466 }, 00:18:16.466 "claimed": true, 00:18:16.466 "claim_type": "exclusive_write", 00:18:16.466 "zoned": false, 00:18:16.466 "supported_io_types": { 00:18:16.466 "read": true, 00:18:16.466 "write": true, 00:18:16.466 "unmap": true, 00:18:16.466 "flush": true, 00:18:16.466 "reset": true, 00:18:16.466 "nvme_admin": false, 00:18:16.466 "nvme_io": false, 00:18:16.466 "nvme_io_md": false, 00:18:16.466 "write_zeroes": true, 00:18:16.466 "zcopy": true, 00:18:16.466 "get_zone_info": false, 00:18:16.466 "zone_management": false, 00:18:16.466 "zone_append": false, 00:18:16.466 "compare": false, 00:18:16.466 "compare_and_write": false, 00:18:16.466 "abort": true, 00:18:16.466 "seek_hole": false, 00:18:16.466 "seek_data": false, 00:18:16.466 "copy": true, 00:18:16.466 "nvme_iov_md": false 00:18:16.466 }, 00:18:16.466 "memory_domains": [ 00:18:16.466 { 00:18:16.466 "dma_device_id": "system", 00:18:16.466 "dma_device_type": 1 00:18:16.466 }, 00:18:16.466 { 00:18:16.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.466 "dma_device_type": 2 00:18:16.466 } 00:18:16.466 ], 00:18:16.466 "driver_specific": {} 00:18:16.466 } 00:18:16.466 ] 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:18:16.466 "name": "Existed_Raid", 00:18:16.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.466 "strip_size_kb": 64, 00:18:16.466 "state": "configuring", 00:18:16.466 "raid_level": "raid5f", 00:18:16.466 "superblock": false, 00:18:16.466 "num_base_bdevs": 3, 00:18:16.466 "num_base_bdevs_discovered": 2, 00:18:16.466 "num_base_bdevs_operational": 3, 00:18:16.466 "base_bdevs_list": [ 00:18:16.466 { 00:18:16.466 "name": "BaseBdev1", 00:18:16.466 "uuid": "f6d6a2d0-1338-47db-9477-2d2b96b6ef72", 00:18:16.466 "is_configured": true, 00:18:16.466 "data_offset": 0, 00:18:16.466 "data_size": 65536 00:18:16.466 }, 00:18:16.466 { 00:18:16.466 "name": "BaseBdev2", 00:18:16.466 "uuid": "f00a24e4-a989-472a-9099-44d66c0a9e48", 00:18:16.466 "is_configured": true, 00:18:16.466 "data_offset": 0, 00:18:16.466 "data_size": 65536 00:18:16.466 }, 00:18:16.466 { 00:18:16.466 "name": "BaseBdev3", 00:18:16.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.466 "is_configured": false, 00:18:16.466 "data_offset": 0, 00:18:16.466 "data_size": 0 00:18:16.466 } 00:18:16.466 ] 00:18:16.466 }' 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.466 18:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.034 [2024-12-06 18:16:42.301302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:17.034 [2024-12-06 18:16:42.301415] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:17.034 [2024-12-06 18:16:42.301441] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:17.034 [2024-12-06 18:16:42.301951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:17.034 [2024-12-06 18:16:42.307207] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:17.034 [2024-12-06 18:16:42.307235] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:17.034 [2024-12-06 18:16:42.307597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.034 BaseBdev3 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.034 [ 00:18:17.034 { 00:18:17.034 "name": "BaseBdev3", 00:18:17.034 "aliases": [ 00:18:17.034 "87ef9b83-0627-4440-8f3b-58e358892f8a" 00:18:17.034 ], 00:18:17.034 "product_name": "Malloc disk", 00:18:17.034 "block_size": 512, 00:18:17.034 "num_blocks": 65536, 00:18:17.034 "uuid": "87ef9b83-0627-4440-8f3b-58e358892f8a", 00:18:17.034 "assigned_rate_limits": { 00:18:17.034 "rw_ios_per_sec": 0, 00:18:17.034 "rw_mbytes_per_sec": 0, 00:18:17.034 "r_mbytes_per_sec": 0, 00:18:17.034 "w_mbytes_per_sec": 0 00:18:17.034 }, 00:18:17.034 "claimed": true, 00:18:17.034 "claim_type": "exclusive_write", 00:18:17.034 "zoned": false, 00:18:17.034 "supported_io_types": { 00:18:17.034 "read": true, 00:18:17.034 "write": true, 00:18:17.034 "unmap": true, 00:18:17.034 "flush": true, 00:18:17.034 "reset": true, 00:18:17.034 "nvme_admin": false, 00:18:17.034 "nvme_io": false, 00:18:17.034 "nvme_io_md": false, 00:18:17.034 "write_zeroes": true, 00:18:17.034 "zcopy": true, 00:18:17.034 "get_zone_info": false, 00:18:17.034 "zone_management": false, 00:18:17.034 "zone_append": false, 00:18:17.034 "compare": false, 00:18:17.034 "compare_and_write": false, 00:18:17.034 "abort": true, 00:18:17.034 "seek_hole": false, 00:18:17.034 "seek_data": false, 00:18:17.034 "copy": true, 00:18:17.034 "nvme_iov_md": false 00:18:17.034 }, 00:18:17.034 "memory_domains": [ 00:18:17.034 { 00:18:17.034 "dma_device_id": "system", 00:18:17.034 "dma_device_type": 1 00:18:17.034 }, 00:18:17.034 { 00:18:17.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.034 "dma_device_type": 2 00:18:17.034 } 00:18:17.034 ], 00:18:17.034 "driver_specific": {} 00:18:17.034 } 00:18:17.034 ] 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.034 18:16:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.034 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.034 "name": "Existed_Raid", 00:18:17.034 "uuid": "f9a63a3c-d7fb-46cb-9130-4ae756bc6596", 00:18:17.034 "strip_size_kb": 64, 00:18:17.034 "state": "online", 00:18:17.034 "raid_level": "raid5f", 00:18:17.034 "superblock": false, 00:18:17.035 "num_base_bdevs": 3, 00:18:17.035 "num_base_bdevs_discovered": 3, 00:18:17.035 "num_base_bdevs_operational": 3, 00:18:17.035 "base_bdevs_list": [ 00:18:17.035 { 00:18:17.035 "name": "BaseBdev1", 00:18:17.035 "uuid": "f6d6a2d0-1338-47db-9477-2d2b96b6ef72", 00:18:17.035 "is_configured": true, 00:18:17.035 "data_offset": 0, 00:18:17.035 "data_size": 65536 00:18:17.035 }, 00:18:17.035 { 00:18:17.035 "name": "BaseBdev2", 00:18:17.035 "uuid": "f00a24e4-a989-472a-9099-44d66c0a9e48", 00:18:17.035 "is_configured": true, 00:18:17.035 "data_offset": 0, 00:18:17.035 "data_size": 65536 00:18:17.035 }, 00:18:17.035 { 00:18:17.035 "name": "BaseBdev3", 00:18:17.035 "uuid": "87ef9b83-0627-4440-8f3b-58e358892f8a", 00:18:17.035 "is_configured": true, 00:18:17.035 "data_offset": 0, 00:18:17.035 "data_size": 65536 00:18:17.035 } 00:18:17.035 ] 00:18:17.035 }' 00:18:17.035 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.035 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.603 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:17.603 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:17.603 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:17.603 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:17.603 18:16:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:17.603 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:17.603 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:17.603 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:17.603 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.603 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.603 [2024-12-06 18:16:42.849732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.603 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.603 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:17.603 "name": "Existed_Raid", 00:18:17.603 "aliases": [ 00:18:17.603 "f9a63a3c-d7fb-46cb-9130-4ae756bc6596" 00:18:17.603 ], 00:18:17.603 "product_name": "Raid Volume", 00:18:17.603 "block_size": 512, 00:18:17.603 "num_blocks": 131072, 00:18:17.603 "uuid": "f9a63a3c-d7fb-46cb-9130-4ae756bc6596", 00:18:17.603 "assigned_rate_limits": { 00:18:17.603 "rw_ios_per_sec": 0, 00:18:17.603 "rw_mbytes_per_sec": 0, 00:18:17.603 "r_mbytes_per_sec": 0, 00:18:17.603 "w_mbytes_per_sec": 0 00:18:17.603 }, 00:18:17.603 "claimed": false, 00:18:17.603 "zoned": false, 00:18:17.603 "supported_io_types": { 00:18:17.603 "read": true, 00:18:17.603 "write": true, 00:18:17.603 "unmap": false, 00:18:17.603 "flush": false, 00:18:17.603 "reset": true, 00:18:17.603 "nvme_admin": false, 00:18:17.603 "nvme_io": false, 00:18:17.603 "nvme_io_md": false, 00:18:17.603 "write_zeroes": true, 00:18:17.603 "zcopy": false, 00:18:17.603 "get_zone_info": false, 00:18:17.603 "zone_management": false, 00:18:17.603 "zone_append": false, 
00:18:17.603 "compare": false, 00:18:17.603 "compare_and_write": false, 00:18:17.603 "abort": false, 00:18:17.603 "seek_hole": false, 00:18:17.603 "seek_data": false, 00:18:17.603 "copy": false, 00:18:17.603 "nvme_iov_md": false 00:18:17.603 }, 00:18:17.603 "driver_specific": { 00:18:17.603 "raid": { 00:18:17.603 "uuid": "f9a63a3c-d7fb-46cb-9130-4ae756bc6596", 00:18:17.603 "strip_size_kb": 64, 00:18:17.603 "state": "online", 00:18:17.603 "raid_level": "raid5f", 00:18:17.603 "superblock": false, 00:18:17.603 "num_base_bdevs": 3, 00:18:17.603 "num_base_bdevs_discovered": 3, 00:18:17.603 "num_base_bdevs_operational": 3, 00:18:17.603 "base_bdevs_list": [ 00:18:17.603 { 00:18:17.603 "name": "BaseBdev1", 00:18:17.603 "uuid": "f6d6a2d0-1338-47db-9477-2d2b96b6ef72", 00:18:17.603 "is_configured": true, 00:18:17.603 "data_offset": 0, 00:18:17.603 "data_size": 65536 00:18:17.603 }, 00:18:17.603 { 00:18:17.603 "name": "BaseBdev2", 00:18:17.603 "uuid": "f00a24e4-a989-472a-9099-44d66c0a9e48", 00:18:17.603 "is_configured": true, 00:18:17.603 "data_offset": 0, 00:18:17.603 "data_size": 65536 00:18:17.603 }, 00:18:17.603 { 00:18:17.603 "name": "BaseBdev3", 00:18:17.603 "uuid": "87ef9b83-0627-4440-8f3b-58e358892f8a", 00:18:17.604 "is_configured": true, 00:18:17.604 "data_offset": 0, 00:18:17.604 "data_size": 65536 00:18:17.604 } 00:18:17.604 ] 00:18:17.604 } 00:18:17.604 } 00:18:17.604 }' 00:18:17.604 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:17.604 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:17.604 BaseBdev2 00:18:17.604 BaseBdev3' 00:18:17.604 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.604 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:18:17.604 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:17.604 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:17.604 18:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.604 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.604 18:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.604 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.863 [2024-12-06 18:16:43.165622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:17.863 
18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.863 "name": "Existed_Raid", 00:18:17.863 "uuid": "f9a63a3c-d7fb-46cb-9130-4ae756bc6596", 00:18:17.863 "strip_size_kb": 64, 00:18:17.863 "state": 
"online", 00:18:17.863 "raid_level": "raid5f", 00:18:17.863 "superblock": false, 00:18:17.863 "num_base_bdevs": 3, 00:18:17.863 "num_base_bdevs_discovered": 2, 00:18:17.863 "num_base_bdevs_operational": 2, 00:18:17.863 "base_bdevs_list": [ 00:18:17.863 { 00:18:17.863 "name": null, 00:18:17.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.863 "is_configured": false, 00:18:17.863 "data_offset": 0, 00:18:17.863 "data_size": 65536 00:18:17.863 }, 00:18:17.863 { 00:18:17.863 "name": "BaseBdev2", 00:18:17.863 "uuid": "f00a24e4-a989-472a-9099-44d66c0a9e48", 00:18:17.863 "is_configured": true, 00:18:17.863 "data_offset": 0, 00:18:17.863 "data_size": 65536 00:18:17.863 }, 00:18:17.863 { 00:18:17.863 "name": "BaseBdev3", 00:18:17.863 "uuid": "87ef9b83-0627-4440-8f3b-58e358892f8a", 00:18:17.863 "is_configured": true, 00:18:17.863 "data_offset": 0, 00:18:17.863 "data_size": 65536 00:18:17.863 } 00:18:17.863 ] 00:18:17.863 }' 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.863 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.510 [2024-12-06 18:16:43.812025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:18.510 [2024-12-06 18:16:43.812316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.510 [2024-12-06 18:16:43.898069] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.510 18:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.510 [2024-12-06 18:16:43.966152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:18.510 [2024-12-06 18:16:43.966367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.787 BaseBdev2 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:18.787 [ 00:18:18.787 { 00:18:18.787 "name": "BaseBdev2", 00:18:18.787 "aliases": [ 00:18:18.787 "559b5fa5-8d2e-4bbe-933d-43f6fccca2b4" 00:18:18.787 ], 00:18:18.787 "product_name": "Malloc disk", 00:18:18.787 "block_size": 512, 00:18:18.787 "num_blocks": 65536, 00:18:18.787 "uuid": "559b5fa5-8d2e-4bbe-933d-43f6fccca2b4", 00:18:18.787 "assigned_rate_limits": { 00:18:18.787 "rw_ios_per_sec": 0, 00:18:18.787 "rw_mbytes_per_sec": 0, 00:18:18.787 "r_mbytes_per_sec": 0, 00:18:18.787 "w_mbytes_per_sec": 0 00:18:18.787 }, 00:18:18.787 "claimed": false, 00:18:18.787 "zoned": false, 00:18:18.787 "supported_io_types": { 00:18:18.787 "read": true, 00:18:18.787 "write": true, 00:18:18.787 "unmap": true, 00:18:18.787 "flush": true, 00:18:18.787 "reset": true, 00:18:18.787 "nvme_admin": false, 00:18:18.787 "nvme_io": false, 00:18:18.787 "nvme_io_md": false, 00:18:18.787 "write_zeroes": true, 00:18:18.787 "zcopy": true, 00:18:18.787 "get_zone_info": false, 00:18:18.787 "zone_management": false, 00:18:18.787 "zone_append": false, 00:18:18.787 "compare": false, 00:18:18.787 "compare_and_write": false, 00:18:18.787 "abort": true, 00:18:18.787 "seek_hole": false, 00:18:18.787 "seek_data": false, 00:18:18.787 "copy": true, 00:18:18.787 "nvme_iov_md": false 00:18:18.787 }, 00:18:18.787 "memory_domains": [ 00:18:18.787 { 00:18:18.787 "dma_device_id": "system", 00:18:18.787 "dma_device_type": 1 00:18:18.787 }, 00:18:18.787 { 00:18:18.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.787 "dma_device_type": 2 00:18:18.787 } 00:18:18.787 ], 00:18:18.787 "driver_specific": {} 00:18:18.787 } 00:18:18.787 ] 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.787 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.787 BaseBdev3 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.788 [ 00:18:18.788 { 00:18:18.788 "name": "BaseBdev3", 00:18:18.788 "aliases": [ 00:18:18.788 "5ee291ef-c918-4053-9835-658a300519cc" 00:18:18.788 ], 00:18:18.788 "product_name": "Malloc disk", 00:18:18.788 "block_size": 512, 00:18:18.788 "num_blocks": 65536, 00:18:18.788 "uuid": "5ee291ef-c918-4053-9835-658a300519cc", 00:18:18.788 "assigned_rate_limits": { 00:18:18.788 "rw_ios_per_sec": 0, 00:18:18.788 "rw_mbytes_per_sec": 0, 00:18:18.788 "r_mbytes_per_sec": 0, 00:18:18.788 "w_mbytes_per_sec": 0 00:18:18.788 }, 00:18:18.788 "claimed": false, 00:18:18.788 "zoned": false, 00:18:18.788 "supported_io_types": { 00:18:18.788 "read": true, 00:18:18.788 "write": true, 00:18:18.788 "unmap": true, 00:18:18.788 "flush": true, 00:18:18.788 "reset": true, 00:18:18.788 "nvme_admin": false, 00:18:18.788 "nvme_io": false, 00:18:18.788 "nvme_io_md": false, 00:18:18.788 "write_zeroes": true, 00:18:18.788 "zcopy": true, 00:18:18.788 "get_zone_info": false, 00:18:18.788 "zone_management": false, 00:18:18.788 "zone_append": false, 00:18:18.788 "compare": false, 00:18:18.788 "compare_and_write": false, 00:18:18.788 "abort": true, 00:18:18.788 "seek_hole": false, 00:18:18.788 "seek_data": false, 00:18:18.788 "copy": true, 00:18:18.788 "nvme_iov_md": false 00:18:18.788 }, 00:18:18.788 "memory_domains": [ 00:18:18.788 { 00:18:18.788 "dma_device_id": "system", 00:18:18.788 "dma_device_type": 1 00:18:18.788 }, 00:18:18.788 { 00:18:18.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.788 "dma_device_type": 2 00:18:18.788 } 00:18:18.788 ], 00:18:18.788 "driver_specific": {} 00:18:18.788 } 00:18:18.788 ] 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:18.788 18:16:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.788 [2024-12-06 18:16:44.268314] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:18.788 [2024-12-06 18:16:44.268516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:18.788 [2024-12-06 18:16:44.268695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.788 [2024-12-06 18:16:44.271212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.788 18:16:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.788 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.048 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.048 "name": "Existed_Raid", 00:18:19.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.048 "strip_size_kb": 64, 00:18:19.048 "state": "configuring", 00:18:19.048 "raid_level": "raid5f", 00:18:19.048 "superblock": false, 00:18:19.048 "num_base_bdevs": 3, 00:18:19.048 "num_base_bdevs_discovered": 2, 00:18:19.048 "num_base_bdevs_operational": 3, 00:18:19.048 "base_bdevs_list": [ 00:18:19.048 { 00:18:19.048 "name": "BaseBdev1", 00:18:19.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.048 "is_configured": false, 00:18:19.048 "data_offset": 0, 00:18:19.048 "data_size": 0 00:18:19.048 }, 00:18:19.048 { 00:18:19.048 "name": "BaseBdev2", 00:18:19.048 "uuid": "559b5fa5-8d2e-4bbe-933d-43f6fccca2b4", 00:18:19.048 "is_configured": true, 00:18:19.048 "data_offset": 0, 00:18:19.048 "data_size": 65536 00:18:19.048 }, 00:18:19.048 { 00:18:19.048 "name": "BaseBdev3", 00:18:19.048 "uuid": "5ee291ef-c918-4053-9835-658a300519cc", 00:18:19.048 "is_configured": true, 
00:18:19.048 "data_offset": 0, 00:18:19.048 "data_size": 65536 00:18:19.048 } 00:18:19.048 ] 00:18:19.048 }' 00:18:19.048 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.048 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.307 [2024-12-06 18:16:44.764447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.307 18:16:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.307 "name": "Existed_Raid", 00:18:19.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.307 "strip_size_kb": 64, 00:18:19.307 "state": "configuring", 00:18:19.307 "raid_level": "raid5f", 00:18:19.307 "superblock": false, 00:18:19.307 "num_base_bdevs": 3, 00:18:19.307 "num_base_bdevs_discovered": 1, 00:18:19.307 "num_base_bdevs_operational": 3, 00:18:19.307 "base_bdevs_list": [ 00:18:19.307 { 00:18:19.307 "name": "BaseBdev1", 00:18:19.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.307 "is_configured": false, 00:18:19.307 "data_offset": 0, 00:18:19.307 "data_size": 0 00:18:19.307 }, 00:18:19.307 { 00:18:19.307 "name": null, 00:18:19.307 "uuid": "559b5fa5-8d2e-4bbe-933d-43f6fccca2b4", 00:18:19.307 "is_configured": false, 00:18:19.307 "data_offset": 0, 00:18:19.307 "data_size": 65536 00:18:19.307 }, 00:18:19.307 { 00:18:19.307 "name": "BaseBdev3", 00:18:19.307 "uuid": "5ee291ef-c918-4053-9835-658a300519cc", 00:18:19.307 "is_configured": true, 00:18:19.307 "data_offset": 0, 00:18:19.307 "data_size": 65536 00:18:19.307 } 00:18:19.307 ] 00:18:19.307 }' 00:18:19.307 18:16:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.307 18:16:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.874 [2024-12-06 18:16:45.382846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.874 BaseBdev1 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:19.874 18:16:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.874 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.132 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.132 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:20.132 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.132 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.132 [ 00:18:20.132 { 00:18:20.132 "name": "BaseBdev1", 00:18:20.132 "aliases": [ 00:18:20.132 "5685acb4-716a-4b79-89bb-323413a1b0e8" 00:18:20.132 ], 00:18:20.132 "product_name": "Malloc disk", 00:18:20.132 "block_size": 512, 00:18:20.132 "num_blocks": 65536, 00:18:20.132 "uuid": "5685acb4-716a-4b79-89bb-323413a1b0e8", 00:18:20.132 "assigned_rate_limits": { 00:18:20.132 "rw_ios_per_sec": 0, 00:18:20.132 "rw_mbytes_per_sec": 0, 00:18:20.132 "r_mbytes_per_sec": 0, 00:18:20.132 "w_mbytes_per_sec": 0 00:18:20.132 }, 00:18:20.132 "claimed": true, 00:18:20.132 "claim_type": "exclusive_write", 00:18:20.132 "zoned": false, 00:18:20.132 "supported_io_types": { 00:18:20.132 "read": true, 00:18:20.132 "write": true, 00:18:20.132 "unmap": true, 00:18:20.132 "flush": true, 00:18:20.132 "reset": true, 00:18:20.132 "nvme_admin": false, 00:18:20.132 "nvme_io": false, 00:18:20.132 "nvme_io_md": false, 00:18:20.132 "write_zeroes": true, 00:18:20.132 "zcopy": true, 00:18:20.132 "get_zone_info": false, 00:18:20.132 "zone_management": false, 00:18:20.132 "zone_append": false, 00:18:20.132 
"compare": false, 00:18:20.132 "compare_and_write": false, 00:18:20.132 "abort": true, 00:18:20.132 "seek_hole": false, 00:18:20.132 "seek_data": false, 00:18:20.132 "copy": true, 00:18:20.132 "nvme_iov_md": false 00:18:20.132 }, 00:18:20.132 "memory_domains": [ 00:18:20.132 { 00:18:20.132 "dma_device_id": "system", 00:18:20.132 "dma_device_type": 1 00:18:20.132 }, 00:18:20.132 { 00:18:20.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.132 "dma_device_type": 2 00:18:20.132 } 00:18:20.132 ], 00:18:20.132 "driver_specific": {} 00:18:20.132 } 00:18:20.132 ] 00:18:20.132 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.132 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:20.132 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:20.132 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.132 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.132 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.132 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.132 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.132 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.133 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.133 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.133 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.133 18:16:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.133 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.133 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.133 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.133 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.133 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.133 "name": "Existed_Raid", 00:18:20.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.133 "strip_size_kb": 64, 00:18:20.133 "state": "configuring", 00:18:20.133 "raid_level": "raid5f", 00:18:20.133 "superblock": false, 00:18:20.133 "num_base_bdevs": 3, 00:18:20.133 "num_base_bdevs_discovered": 2, 00:18:20.133 "num_base_bdevs_operational": 3, 00:18:20.133 "base_bdevs_list": [ 00:18:20.133 { 00:18:20.133 "name": "BaseBdev1", 00:18:20.133 "uuid": "5685acb4-716a-4b79-89bb-323413a1b0e8", 00:18:20.133 "is_configured": true, 00:18:20.133 "data_offset": 0, 00:18:20.133 "data_size": 65536 00:18:20.133 }, 00:18:20.133 { 00:18:20.133 "name": null, 00:18:20.133 "uuid": "559b5fa5-8d2e-4bbe-933d-43f6fccca2b4", 00:18:20.133 "is_configured": false, 00:18:20.133 "data_offset": 0, 00:18:20.133 "data_size": 65536 00:18:20.133 }, 00:18:20.133 { 00:18:20.133 "name": "BaseBdev3", 00:18:20.133 "uuid": "5ee291ef-c918-4053-9835-658a300519cc", 00:18:20.133 "is_configured": true, 00:18:20.133 "data_offset": 0, 00:18:20.133 "data_size": 65536 00:18:20.133 } 00:18:20.133 ] 00:18:20.133 }' 00:18:20.133 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.133 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.699 18:16:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.699 [2024-12-06 18:16:45.975063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.699 18:16:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.699 18:16:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.699 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.699 "name": "Existed_Raid", 00:18:20.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.699 "strip_size_kb": 64, 00:18:20.699 "state": "configuring", 00:18:20.699 "raid_level": "raid5f", 00:18:20.699 "superblock": false, 00:18:20.699 "num_base_bdevs": 3, 00:18:20.699 "num_base_bdevs_discovered": 1, 00:18:20.699 "num_base_bdevs_operational": 3, 00:18:20.699 "base_bdevs_list": [ 00:18:20.699 { 00:18:20.699 "name": "BaseBdev1", 00:18:20.699 "uuid": "5685acb4-716a-4b79-89bb-323413a1b0e8", 00:18:20.699 "is_configured": true, 00:18:20.699 "data_offset": 0, 00:18:20.699 "data_size": 65536 00:18:20.699 }, 00:18:20.699 { 00:18:20.699 "name": null, 00:18:20.699 "uuid": "559b5fa5-8d2e-4bbe-933d-43f6fccca2b4", 00:18:20.699 "is_configured": false, 00:18:20.699 "data_offset": 0, 00:18:20.699 "data_size": 65536 00:18:20.699 }, 00:18:20.699 { 00:18:20.699 "name": null, 
00:18:20.699 "uuid": "5ee291ef-c918-4053-9835-658a300519cc", 00:18:20.699 "is_configured": false, 00:18:20.699 "data_offset": 0, 00:18:20.699 "data_size": 65536 00:18:20.699 } 00:18:20.699 ] 00:18:20.699 }' 00:18:20.699 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.699 18:16:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.958 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:20.958 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.958 18:16:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.958 18:16:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.217 [2024-12-06 18:16:46.515237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.217 18:16:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.217 "name": "Existed_Raid", 00:18:21.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.217 "strip_size_kb": 64, 00:18:21.217 "state": "configuring", 00:18:21.217 "raid_level": "raid5f", 00:18:21.217 "superblock": false, 00:18:21.217 "num_base_bdevs": 3, 00:18:21.217 "num_base_bdevs_discovered": 2, 00:18:21.217 "num_base_bdevs_operational": 3, 00:18:21.217 "base_bdevs_list": [ 00:18:21.217 { 
00:18:21.217 "name": "BaseBdev1", 00:18:21.217 "uuid": "5685acb4-716a-4b79-89bb-323413a1b0e8", 00:18:21.217 "is_configured": true, 00:18:21.217 "data_offset": 0, 00:18:21.217 "data_size": 65536 00:18:21.217 }, 00:18:21.217 { 00:18:21.217 "name": null, 00:18:21.217 "uuid": "559b5fa5-8d2e-4bbe-933d-43f6fccca2b4", 00:18:21.217 "is_configured": false, 00:18:21.217 "data_offset": 0, 00:18:21.217 "data_size": 65536 00:18:21.217 }, 00:18:21.217 { 00:18:21.217 "name": "BaseBdev3", 00:18:21.217 "uuid": "5ee291ef-c918-4053-9835-658a300519cc", 00:18:21.217 "is_configured": true, 00:18:21.217 "data_offset": 0, 00:18:21.217 "data_size": 65536 00:18:21.217 } 00:18:21.217 ] 00:18:21.217 }' 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.217 18:16:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.785 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:21.785 18:16:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.785 18:16:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.785 18:16:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.785 [2024-12-06 18:16:47.055376] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.785 "name": "Existed_Raid", 00:18:21.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.785 "strip_size_kb": 64, 00:18:21.785 "state": "configuring", 00:18:21.785 "raid_level": "raid5f", 00:18:21.785 "superblock": false, 00:18:21.785 "num_base_bdevs": 3, 00:18:21.785 "num_base_bdevs_discovered": 1, 00:18:21.785 "num_base_bdevs_operational": 3, 00:18:21.785 "base_bdevs_list": [ 00:18:21.785 { 00:18:21.785 "name": null, 00:18:21.785 "uuid": "5685acb4-716a-4b79-89bb-323413a1b0e8", 00:18:21.785 "is_configured": false, 00:18:21.785 "data_offset": 0, 00:18:21.785 "data_size": 65536 00:18:21.785 }, 00:18:21.785 { 00:18:21.785 "name": null, 00:18:21.785 "uuid": "559b5fa5-8d2e-4bbe-933d-43f6fccca2b4", 00:18:21.785 "is_configured": false, 00:18:21.785 "data_offset": 0, 00:18:21.785 "data_size": 65536 00:18:21.785 }, 00:18:21.785 { 00:18:21.785 "name": "BaseBdev3", 00:18:21.785 "uuid": "5ee291ef-c918-4053-9835-658a300519cc", 00:18:21.785 "is_configured": true, 00:18:21.785 "data_offset": 0, 00:18:21.785 "data_size": 65536 00:18:21.785 } 00:18:21.785 ] 00:18:21.785 }' 00:18:21.785 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.786 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.353 [2024-12-06 18:16:47.624923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.353 18:16:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.353 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.353 "name": "Existed_Raid", 00:18:22.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.353 "strip_size_kb": 64, 00:18:22.353 "state": "configuring", 00:18:22.353 "raid_level": "raid5f", 00:18:22.353 "superblock": false, 00:18:22.353 "num_base_bdevs": 3, 00:18:22.353 "num_base_bdevs_discovered": 2, 00:18:22.353 "num_base_bdevs_operational": 3, 00:18:22.353 "base_bdevs_list": [ 00:18:22.353 { 00:18:22.353 "name": null, 00:18:22.353 "uuid": "5685acb4-716a-4b79-89bb-323413a1b0e8", 00:18:22.353 "is_configured": false, 00:18:22.353 "data_offset": 0, 00:18:22.353 "data_size": 65536 00:18:22.353 }, 00:18:22.353 { 00:18:22.353 "name": "BaseBdev2", 00:18:22.353 "uuid": "559b5fa5-8d2e-4bbe-933d-43f6fccca2b4", 00:18:22.353 "is_configured": true, 00:18:22.353 "data_offset": 0, 00:18:22.353 "data_size": 65536 00:18:22.353 }, 00:18:22.353 { 00:18:22.354 "name": "BaseBdev3", 00:18:22.354 "uuid": "5ee291ef-c918-4053-9835-658a300519cc", 00:18:22.354 "is_configured": true, 00:18:22.354 "data_offset": 0, 00:18:22.354 "data_size": 65536 00:18:22.354 } 00:18:22.354 ] 00:18:22.354 }' 00:18:22.354 18:16:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.354 18:16:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.922 18:16:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5685acb4-716a-4b79-89bb-323413a1b0e8 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.922 [2024-12-06 18:16:48.294630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:22.922 [2024-12-06 18:16:48.294697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:22.922 [2024-12-06 18:16:48.294713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:22.922 [2024-12-06 18:16:48.295070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:18:22.922 [2024-12-06 18:16:48.299903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:22.922 NewBaseBdev 00:18:22.922 [2024-12-06 18:16:48.300065] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:22.922 [2024-12-06 18:16:48.300405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.922 18:16:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.922 [ 00:18:22.922 { 00:18:22.922 "name": "NewBaseBdev", 00:18:22.922 "aliases": [ 00:18:22.922 "5685acb4-716a-4b79-89bb-323413a1b0e8" 00:18:22.922 ], 00:18:22.922 "product_name": "Malloc disk", 00:18:22.922 "block_size": 512, 00:18:22.922 "num_blocks": 65536, 00:18:22.922 "uuid": "5685acb4-716a-4b79-89bb-323413a1b0e8", 00:18:22.922 "assigned_rate_limits": { 00:18:22.922 "rw_ios_per_sec": 0, 00:18:22.922 "rw_mbytes_per_sec": 0, 00:18:22.922 "r_mbytes_per_sec": 0, 00:18:22.922 "w_mbytes_per_sec": 0 00:18:22.922 }, 00:18:22.922 "claimed": true, 00:18:22.922 "claim_type": "exclusive_write", 00:18:22.922 "zoned": false, 00:18:22.922 "supported_io_types": { 00:18:22.922 "read": true, 00:18:22.922 "write": true, 00:18:22.922 "unmap": true, 00:18:22.922 "flush": true, 00:18:22.922 "reset": true, 00:18:22.922 "nvme_admin": false, 00:18:22.922 "nvme_io": false, 00:18:22.922 "nvme_io_md": false, 00:18:22.922 "write_zeroes": true, 00:18:22.922 "zcopy": true, 00:18:22.922 "get_zone_info": false, 00:18:22.922 "zone_management": false, 00:18:22.922 "zone_append": false, 00:18:22.922 "compare": false, 00:18:22.922 "compare_and_write": false, 00:18:22.922 "abort": true, 00:18:22.922 "seek_hole": false, 00:18:22.922 "seek_data": false, 00:18:22.922 "copy": true, 00:18:22.922 "nvme_iov_md": false 00:18:22.922 }, 00:18:22.922 "memory_domains": [ 00:18:22.922 { 00:18:22.922 "dma_device_id": "system", 00:18:22.922 "dma_device_type": 1 00:18:22.922 }, 00:18:22.922 { 00:18:22.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.922 "dma_device_type": 2 00:18:22.922 } 00:18:22.922 ], 00:18:22.922 "driver_specific": {} 00:18:22.922 } 00:18:22.922 ] 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:22.922 18:16:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.922 "name": "Existed_Raid", 00:18:22.922 "uuid": "9e1a5278-978b-4a7a-9fc7-83196fc739b7", 00:18:22.922 "strip_size_kb": 64, 00:18:22.922 "state": "online", 
00:18:22.922 "raid_level": "raid5f", 00:18:22.922 "superblock": false, 00:18:22.922 "num_base_bdevs": 3, 00:18:22.922 "num_base_bdevs_discovered": 3, 00:18:22.922 "num_base_bdevs_operational": 3, 00:18:22.922 "base_bdevs_list": [ 00:18:22.922 { 00:18:22.922 "name": "NewBaseBdev", 00:18:22.922 "uuid": "5685acb4-716a-4b79-89bb-323413a1b0e8", 00:18:22.922 "is_configured": true, 00:18:22.922 "data_offset": 0, 00:18:22.922 "data_size": 65536 00:18:22.922 }, 00:18:22.922 { 00:18:22.922 "name": "BaseBdev2", 00:18:22.922 "uuid": "559b5fa5-8d2e-4bbe-933d-43f6fccca2b4", 00:18:22.922 "is_configured": true, 00:18:22.922 "data_offset": 0, 00:18:22.922 "data_size": 65536 00:18:22.922 }, 00:18:22.922 { 00:18:22.922 "name": "BaseBdev3", 00:18:22.922 "uuid": "5ee291ef-c918-4053-9835-658a300519cc", 00:18:22.922 "is_configured": true, 00:18:22.922 "data_offset": 0, 00:18:22.922 "data_size": 65536 00:18:22.922 } 00:18:22.922 ] 00:18:22.922 }' 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.922 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:23.490 18:16:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.490 [2024-12-06 18:16:48.850335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:23.490 "name": "Existed_Raid", 00:18:23.490 "aliases": [ 00:18:23.490 "9e1a5278-978b-4a7a-9fc7-83196fc739b7" 00:18:23.490 ], 00:18:23.490 "product_name": "Raid Volume", 00:18:23.490 "block_size": 512, 00:18:23.490 "num_blocks": 131072, 00:18:23.490 "uuid": "9e1a5278-978b-4a7a-9fc7-83196fc739b7", 00:18:23.490 "assigned_rate_limits": { 00:18:23.490 "rw_ios_per_sec": 0, 00:18:23.490 "rw_mbytes_per_sec": 0, 00:18:23.490 "r_mbytes_per_sec": 0, 00:18:23.490 "w_mbytes_per_sec": 0 00:18:23.490 }, 00:18:23.490 "claimed": false, 00:18:23.490 "zoned": false, 00:18:23.490 "supported_io_types": { 00:18:23.490 "read": true, 00:18:23.490 "write": true, 00:18:23.490 "unmap": false, 00:18:23.490 "flush": false, 00:18:23.490 "reset": true, 00:18:23.490 "nvme_admin": false, 00:18:23.490 "nvme_io": false, 00:18:23.490 "nvme_io_md": false, 00:18:23.490 "write_zeroes": true, 00:18:23.490 "zcopy": false, 00:18:23.490 "get_zone_info": false, 00:18:23.490 "zone_management": false, 00:18:23.490 "zone_append": false, 00:18:23.490 "compare": false, 00:18:23.490 "compare_and_write": false, 00:18:23.490 "abort": false, 00:18:23.490 "seek_hole": false, 00:18:23.490 "seek_data": false, 00:18:23.490 "copy": false, 00:18:23.490 "nvme_iov_md": false 00:18:23.490 }, 00:18:23.490 "driver_specific": { 00:18:23.490 "raid": { 00:18:23.490 "uuid": 
"9e1a5278-978b-4a7a-9fc7-83196fc739b7", 00:18:23.490 "strip_size_kb": 64, 00:18:23.490 "state": "online", 00:18:23.490 "raid_level": "raid5f", 00:18:23.490 "superblock": false, 00:18:23.490 "num_base_bdevs": 3, 00:18:23.490 "num_base_bdevs_discovered": 3, 00:18:23.490 "num_base_bdevs_operational": 3, 00:18:23.490 "base_bdevs_list": [ 00:18:23.490 { 00:18:23.490 "name": "NewBaseBdev", 00:18:23.490 "uuid": "5685acb4-716a-4b79-89bb-323413a1b0e8", 00:18:23.490 "is_configured": true, 00:18:23.490 "data_offset": 0, 00:18:23.490 "data_size": 65536 00:18:23.490 }, 00:18:23.490 { 00:18:23.490 "name": "BaseBdev2", 00:18:23.490 "uuid": "559b5fa5-8d2e-4bbe-933d-43f6fccca2b4", 00:18:23.490 "is_configured": true, 00:18:23.490 "data_offset": 0, 00:18:23.490 "data_size": 65536 00:18:23.490 }, 00:18:23.490 { 00:18:23.490 "name": "BaseBdev3", 00:18:23.490 "uuid": "5ee291ef-c918-4053-9835-658a300519cc", 00:18:23.490 "is_configured": true, 00:18:23.490 "data_offset": 0, 00:18:23.490 "data_size": 65536 00:18:23.490 } 00:18:23.490 ] 00:18:23.490 } 00:18:23.490 } 00:18:23.490 }' 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:23.490 BaseBdev2 00:18:23.490 BaseBdev3' 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.490 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:23.491 18:16:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.491 18:16:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.491 18:16:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.749 [2024-12-06 18:16:49.190189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.749 [2024-12-06 18:16:49.190339] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.749 [2024-12-06 18:16:49.190605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.749 [2024-12-06 18:16:49.191085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.749 [2024-12-06 18:16:49.191242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80275 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80275 ']' 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80275 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.749 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80275 00:18:23.749 killing process with pid 80275 00:18:23.750 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.750 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.750 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80275' 00:18:23.750 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80275 00:18:23.750 [2024-12-06 18:16:49.228807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:23.750 18:16:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80275 00:18:24.010 [2024-12-06 18:16:49.501678] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.384 18:16:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:25.384 00:18:25.384 real 0m11.564s 00:18:25.384 user 0m19.093s 00:18:25.384 sys 0m1.609s 00:18:25.384 18:16:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.384 18:16:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.384 ************************************ 00:18:25.384 END TEST raid5f_state_function_test 00:18:25.384 ************************************ 00:18:25.385 18:16:50 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:18:25.385 18:16:50 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:25.385 18:16:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.385 18:16:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.385 ************************************ 00:18:25.385 START TEST raid5f_state_function_test_sb 00:18:25.385 ************************************ 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:25.385 18:16:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80908 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:25.385 Process raid pid: 80908 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80908' 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80908 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80908 ']' 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.385 18:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.385 [2024-12-06 18:16:50.737280] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:18:25.385 [2024-12-06 18:16:50.737680] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.647 [2024-12-06 18:16:50.912880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.647 [2024-12-06 18:16:51.053071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.908 [2024-12-06 18:16:51.269571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.908 [2024-12-06 18:16:51.269741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.474 [2024-12-06 18:16:51.743513] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:26.474 [2024-12-06 18:16:51.743581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:26.474 [2024-12-06 18:16:51.743602] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.474 [2024-12-06 18:16:51.743619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:26.474 [2024-12-06 18:16:51.743629] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:18:26.474 [2024-12-06 18:16:51.743643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.474 18:16:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.474 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.474 "name": "Existed_Raid", 00:18:26.474 "uuid": "f643a0f5-609f-40ca-b35d-1a01f6c8d523", 00:18:26.474 "strip_size_kb": 64, 00:18:26.474 "state": "configuring", 00:18:26.474 "raid_level": "raid5f", 00:18:26.474 "superblock": true, 00:18:26.474 "num_base_bdevs": 3, 00:18:26.474 "num_base_bdevs_discovered": 0, 00:18:26.474 "num_base_bdevs_operational": 3, 00:18:26.474 "base_bdevs_list": [ 00:18:26.474 { 00:18:26.474 "name": "BaseBdev1", 00:18:26.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.474 "is_configured": false, 00:18:26.474 "data_offset": 0, 00:18:26.474 "data_size": 0 00:18:26.474 }, 00:18:26.474 { 00:18:26.474 "name": "BaseBdev2", 00:18:26.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.474 "is_configured": false, 00:18:26.474 "data_offset": 0, 00:18:26.474 "data_size": 0 00:18:26.474 }, 00:18:26.474 { 00:18:26.474 "name": "BaseBdev3", 00:18:26.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.475 "is_configured": false, 00:18:26.475 "data_offset": 0, 00:18:26.475 "data_size": 0 00:18:26.475 } 00:18:26.475 ] 00:18:26.475 }' 00:18:26.475 18:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.475 18:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.041 [2024-12-06 18:16:52.263612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:27.041 
[2024-12-06 18:16:52.263667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.041 [2024-12-06 18:16:52.271569] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:27.041 [2024-12-06 18:16:52.271794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:27.041 [2024-12-06 18:16:52.271918] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.041 [2024-12-06 18:16:52.271984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.041 [2024-12-06 18:16:52.272098] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:27.041 [2024-12-06 18:16:52.272157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.041 [2024-12-06 18:16:52.317573] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.041 BaseBdev1 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.041 [ 00:18:27.041 { 00:18:27.041 "name": "BaseBdev1", 00:18:27.041 "aliases": [ 00:18:27.041 "bec045a9-ef00-44e7-9df1-f274bf5f9b0c" 00:18:27.041 ], 00:18:27.041 "product_name": "Malloc disk", 00:18:27.041 "block_size": 512, 00:18:27.041 
"num_blocks": 65536, 00:18:27.041 "uuid": "bec045a9-ef00-44e7-9df1-f274bf5f9b0c", 00:18:27.041 "assigned_rate_limits": { 00:18:27.041 "rw_ios_per_sec": 0, 00:18:27.041 "rw_mbytes_per_sec": 0, 00:18:27.041 "r_mbytes_per_sec": 0, 00:18:27.041 "w_mbytes_per_sec": 0 00:18:27.041 }, 00:18:27.041 "claimed": true, 00:18:27.041 "claim_type": "exclusive_write", 00:18:27.041 "zoned": false, 00:18:27.041 "supported_io_types": { 00:18:27.041 "read": true, 00:18:27.041 "write": true, 00:18:27.041 "unmap": true, 00:18:27.041 "flush": true, 00:18:27.041 "reset": true, 00:18:27.041 "nvme_admin": false, 00:18:27.041 "nvme_io": false, 00:18:27.041 "nvme_io_md": false, 00:18:27.041 "write_zeroes": true, 00:18:27.041 "zcopy": true, 00:18:27.041 "get_zone_info": false, 00:18:27.041 "zone_management": false, 00:18:27.041 "zone_append": false, 00:18:27.041 "compare": false, 00:18:27.041 "compare_and_write": false, 00:18:27.041 "abort": true, 00:18:27.041 "seek_hole": false, 00:18:27.041 "seek_data": false, 00:18:27.041 "copy": true, 00:18:27.041 "nvme_iov_md": false 00:18:27.041 }, 00:18:27.041 "memory_domains": [ 00:18:27.041 { 00:18:27.041 "dma_device_id": "system", 00:18:27.041 "dma_device_type": 1 00:18:27.041 }, 00:18:27.041 { 00:18:27.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.041 "dma_device_type": 2 00:18:27.041 } 00:18:27.041 ], 00:18:27.041 "driver_specific": {} 00:18:27.041 } 00:18:27.041 ] 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.041 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.041 "name": "Existed_Raid", 00:18:27.041 "uuid": "da2e3a95-2197-4362-9765-2f23715f5aa4", 00:18:27.041 "strip_size_kb": 64, 00:18:27.041 "state": "configuring", 00:18:27.041 "raid_level": "raid5f", 00:18:27.041 "superblock": true, 00:18:27.041 "num_base_bdevs": 3, 00:18:27.041 "num_base_bdevs_discovered": 1, 00:18:27.041 "num_base_bdevs_operational": 3, 00:18:27.041 "base_bdevs_list": [ 00:18:27.041 { 00:18:27.041 
"name": "BaseBdev1", 00:18:27.041 "uuid": "bec045a9-ef00-44e7-9df1-f274bf5f9b0c", 00:18:27.041 "is_configured": true, 00:18:27.041 "data_offset": 2048, 00:18:27.041 "data_size": 63488 00:18:27.041 }, 00:18:27.041 { 00:18:27.041 "name": "BaseBdev2", 00:18:27.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.041 "is_configured": false, 00:18:27.041 "data_offset": 0, 00:18:27.041 "data_size": 0 00:18:27.041 }, 00:18:27.041 { 00:18:27.041 "name": "BaseBdev3", 00:18:27.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.041 "is_configured": false, 00:18:27.041 "data_offset": 0, 00:18:27.042 "data_size": 0 00:18:27.042 } 00:18:27.042 ] 00:18:27.042 }' 00:18:27.042 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.042 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.607 [2024-12-06 18:16:52.877814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:27.607 [2024-12-06 18:16:52.878013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:18:27.607 [2024-12-06 18:16:52.889927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.607 [2024-12-06 18:16:52.892540] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.607 [2024-12-06 18:16:52.892718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.607 [2024-12-06 18:16:52.892855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:27.607 [2024-12-06 18:16:52.892916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.607 "name": "Existed_Raid", 00:18:27.607 "uuid": "d6083fca-2f38-41c3-af08-cd6dba576cc8", 00:18:27.607 "strip_size_kb": 64, 00:18:27.607 "state": "configuring", 00:18:27.607 "raid_level": "raid5f", 00:18:27.607 "superblock": true, 00:18:27.607 "num_base_bdevs": 3, 00:18:27.607 "num_base_bdevs_discovered": 1, 00:18:27.607 "num_base_bdevs_operational": 3, 00:18:27.607 "base_bdevs_list": [ 00:18:27.607 { 00:18:27.607 "name": "BaseBdev1", 00:18:27.607 "uuid": "bec045a9-ef00-44e7-9df1-f274bf5f9b0c", 00:18:27.607 "is_configured": true, 00:18:27.607 "data_offset": 2048, 00:18:27.607 "data_size": 63488 00:18:27.607 }, 00:18:27.607 { 00:18:27.607 "name": "BaseBdev2", 00:18:27.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.607 "is_configured": false, 00:18:27.607 "data_offset": 0, 00:18:27.607 "data_size": 0 00:18:27.607 }, 00:18:27.607 { 00:18:27.607 "name": "BaseBdev3", 00:18:27.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.607 "is_configured": false, 00:18:27.607 "data_offset": 0, 00:18:27.607 "data_size": 
0 00:18:27.607 } 00:18:27.607 ] 00:18:27.607 }' 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.607 18:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.175 [2024-12-06 18:16:53.435153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:28.175 BaseBdev2 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.175 [ 00:18:28.175 { 00:18:28.175 "name": "BaseBdev2", 00:18:28.175 "aliases": [ 00:18:28.175 "7d56b7a4-0a72-4137-b982-cb52656fae6b" 00:18:28.175 ], 00:18:28.175 "product_name": "Malloc disk", 00:18:28.175 "block_size": 512, 00:18:28.175 "num_blocks": 65536, 00:18:28.175 "uuid": "7d56b7a4-0a72-4137-b982-cb52656fae6b", 00:18:28.175 "assigned_rate_limits": { 00:18:28.175 "rw_ios_per_sec": 0, 00:18:28.175 "rw_mbytes_per_sec": 0, 00:18:28.175 "r_mbytes_per_sec": 0, 00:18:28.175 "w_mbytes_per_sec": 0 00:18:28.175 }, 00:18:28.175 "claimed": true, 00:18:28.175 "claim_type": "exclusive_write", 00:18:28.175 "zoned": false, 00:18:28.175 "supported_io_types": { 00:18:28.175 "read": true, 00:18:28.175 "write": true, 00:18:28.175 "unmap": true, 00:18:28.175 "flush": true, 00:18:28.175 "reset": true, 00:18:28.175 "nvme_admin": false, 00:18:28.175 "nvme_io": false, 00:18:28.175 "nvme_io_md": false, 00:18:28.175 "write_zeroes": true, 00:18:28.175 "zcopy": true, 00:18:28.175 "get_zone_info": false, 00:18:28.175 "zone_management": false, 00:18:28.175 "zone_append": false, 00:18:28.175 "compare": false, 00:18:28.175 "compare_and_write": false, 00:18:28.175 "abort": true, 00:18:28.175 "seek_hole": false, 00:18:28.175 "seek_data": false, 00:18:28.175 "copy": true, 00:18:28.175 "nvme_iov_md": false 00:18:28.175 }, 00:18:28.175 "memory_domains": [ 00:18:28.175 { 00:18:28.175 "dma_device_id": "system", 00:18:28.175 "dma_device_type": 1 00:18:28.175 }, 00:18:28.175 { 00:18:28.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.175 "dma_device_type": 2 00:18:28.175 } 
00:18:28.175 ], 00:18:28.175 "driver_specific": {} 00:18:28.175 } 00:18:28.175 ] 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.175 "name": "Existed_Raid", 00:18:28.175 "uuid": "d6083fca-2f38-41c3-af08-cd6dba576cc8", 00:18:28.175 "strip_size_kb": 64, 00:18:28.175 "state": "configuring", 00:18:28.175 "raid_level": "raid5f", 00:18:28.175 "superblock": true, 00:18:28.175 "num_base_bdevs": 3, 00:18:28.175 "num_base_bdevs_discovered": 2, 00:18:28.175 "num_base_bdevs_operational": 3, 00:18:28.175 "base_bdevs_list": [ 00:18:28.175 { 00:18:28.175 "name": "BaseBdev1", 00:18:28.175 "uuid": "bec045a9-ef00-44e7-9df1-f274bf5f9b0c", 00:18:28.175 "is_configured": true, 00:18:28.175 "data_offset": 2048, 00:18:28.175 "data_size": 63488 00:18:28.175 }, 00:18:28.175 { 00:18:28.175 "name": "BaseBdev2", 00:18:28.175 "uuid": "7d56b7a4-0a72-4137-b982-cb52656fae6b", 00:18:28.175 "is_configured": true, 00:18:28.175 "data_offset": 2048, 00:18:28.175 "data_size": 63488 00:18:28.175 }, 00:18:28.175 { 00:18:28.175 "name": "BaseBdev3", 00:18:28.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.175 "is_configured": false, 00:18:28.175 "data_offset": 0, 00:18:28.175 "data_size": 0 00:18:28.175 } 00:18:28.175 ] 00:18:28.175 }' 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.175 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.742 18:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:28.742 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:18:28.742 18:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.742 [2024-12-06 18:16:54.057535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:28.742 BaseBdev3 00:18:28.742 [2024-12-06 18:16:54.058048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:28.742 [2024-12-06 18:16:54.058083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:28.742 [2024-12-06 18:16:54.058586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:28.742 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.742 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:28.742 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:28.742 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:28.742 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:28.742 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:28.742 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:28.742 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:28.742 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.742 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.742 [2024-12-06 18:16:54.064287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:28.742 [2024-12-06 18:16:54.064449] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:28.742 [2024-12-06 18:16:54.064827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.742 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.743 [ 00:18:28.743 { 00:18:28.743 "name": "BaseBdev3", 00:18:28.743 "aliases": [ 00:18:28.743 "c41776b2-0831-41d5-92f5-787e33615f11" 00:18:28.743 ], 00:18:28.743 "product_name": "Malloc disk", 00:18:28.743 "block_size": 512, 00:18:28.743 "num_blocks": 65536, 00:18:28.743 "uuid": "c41776b2-0831-41d5-92f5-787e33615f11", 00:18:28.743 "assigned_rate_limits": { 00:18:28.743 "rw_ios_per_sec": 0, 00:18:28.743 "rw_mbytes_per_sec": 0, 00:18:28.743 "r_mbytes_per_sec": 0, 00:18:28.743 "w_mbytes_per_sec": 0 00:18:28.743 }, 00:18:28.743 "claimed": true, 00:18:28.743 "claim_type": "exclusive_write", 00:18:28.743 "zoned": false, 00:18:28.743 "supported_io_types": { 00:18:28.743 "read": true, 00:18:28.743 "write": true, 00:18:28.743 "unmap": true, 00:18:28.743 "flush": true, 00:18:28.743 "reset": true, 00:18:28.743 "nvme_admin": false, 00:18:28.743 "nvme_io": false, 00:18:28.743 "nvme_io_md": false, 00:18:28.743 "write_zeroes": true, 00:18:28.743 "zcopy": true, 00:18:28.743 "get_zone_info": false, 00:18:28.743 "zone_management": false, 00:18:28.743 "zone_append": false, 00:18:28.743 "compare": false, 00:18:28.743 "compare_and_write": false, 00:18:28.743 "abort": true, 00:18:28.743 "seek_hole": false, 00:18:28.743 "seek_data": false, 00:18:28.743 "copy": true, 00:18:28.743 
"nvme_iov_md": false 00:18:28.743 }, 00:18:28.743 "memory_domains": [ 00:18:28.743 { 00:18:28.743 "dma_device_id": "system", 00:18:28.743 "dma_device_type": 1 00:18:28.743 }, 00:18:28.743 { 00:18:28.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.743 "dma_device_type": 2 00:18:28.743 } 00:18:28.743 ], 00:18:28.743 "driver_specific": {} 00:18:28.743 } 00:18:28.743 ] 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.743 "name": "Existed_Raid", 00:18:28.743 "uuid": "d6083fca-2f38-41c3-af08-cd6dba576cc8", 00:18:28.743 "strip_size_kb": 64, 00:18:28.743 "state": "online", 00:18:28.743 "raid_level": "raid5f", 00:18:28.743 "superblock": true, 00:18:28.743 "num_base_bdevs": 3, 00:18:28.743 "num_base_bdevs_discovered": 3, 00:18:28.743 "num_base_bdevs_operational": 3, 00:18:28.743 "base_bdevs_list": [ 00:18:28.743 { 00:18:28.743 "name": "BaseBdev1", 00:18:28.743 "uuid": "bec045a9-ef00-44e7-9df1-f274bf5f9b0c", 00:18:28.743 "is_configured": true, 00:18:28.743 "data_offset": 2048, 00:18:28.743 "data_size": 63488 00:18:28.743 }, 00:18:28.743 { 00:18:28.743 "name": "BaseBdev2", 00:18:28.743 "uuid": "7d56b7a4-0a72-4137-b982-cb52656fae6b", 00:18:28.743 "is_configured": true, 00:18:28.743 "data_offset": 2048, 00:18:28.743 "data_size": 63488 00:18:28.743 }, 00:18:28.743 { 00:18:28.743 "name": "BaseBdev3", 00:18:28.743 "uuid": "c41776b2-0831-41d5-92f5-787e33615f11", 00:18:28.743 "is_configured": true, 00:18:28.743 "data_offset": 2048, 00:18:28.743 "data_size": 63488 00:18:28.743 } 00:18:28.743 ] 00:18:28.743 }' 00:18:28.743 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.743 18:16:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.310 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:29.310 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:29.310 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:29.310 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:29.310 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:29.310 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:29.310 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:29.310 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.310 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.311 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:29.311 [2024-12-06 18:16:54.635242] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.311 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.311 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:29.311 "name": "Existed_Raid", 00:18:29.311 "aliases": [ 00:18:29.311 "d6083fca-2f38-41c3-af08-cd6dba576cc8" 00:18:29.311 ], 00:18:29.311 "product_name": "Raid Volume", 00:18:29.311 "block_size": 512, 00:18:29.311 "num_blocks": 126976, 00:18:29.311 "uuid": "d6083fca-2f38-41c3-af08-cd6dba576cc8", 00:18:29.311 "assigned_rate_limits": { 00:18:29.311 "rw_ios_per_sec": 0, 00:18:29.311 
"rw_mbytes_per_sec": 0, 00:18:29.311 "r_mbytes_per_sec": 0, 00:18:29.311 "w_mbytes_per_sec": 0 00:18:29.311 }, 00:18:29.311 "claimed": false, 00:18:29.311 "zoned": false, 00:18:29.311 "supported_io_types": { 00:18:29.311 "read": true, 00:18:29.311 "write": true, 00:18:29.311 "unmap": false, 00:18:29.311 "flush": false, 00:18:29.311 "reset": true, 00:18:29.311 "nvme_admin": false, 00:18:29.311 "nvme_io": false, 00:18:29.311 "nvme_io_md": false, 00:18:29.311 "write_zeroes": true, 00:18:29.311 "zcopy": false, 00:18:29.311 "get_zone_info": false, 00:18:29.311 "zone_management": false, 00:18:29.311 "zone_append": false, 00:18:29.311 "compare": false, 00:18:29.311 "compare_and_write": false, 00:18:29.311 "abort": false, 00:18:29.311 "seek_hole": false, 00:18:29.311 "seek_data": false, 00:18:29.311 "copy": false, 00:18:29.311 "nvme_iov_md": false 00:18:29.311 }, 00:18:29.311 "driver_specific": { 00:18:29.311 "raid": { 00:18:29.311 "uuid": "d6083fca-2f38-41c3-af08-cd6dba576cc8", 00:18:29.311 "strip_size_kb": 64, 00:18:29.311 "state": "online", 00:18:29.311 "raid_level": "raid5f", 00:18:29.311 "superblock": true, 00:18:29.311 "num_base_bdevs": 3, 00:18:29.311 "num_base_bdevs_discovered": 3, 00:18:29.311 "num_base_bdevs_operational": 3, 00:18:29.311 "base_bdevs_list": [ 00:18:29.311 { 00:18:29.311 "name": "BaseBdev1", 00:18:29.311 "uuid": "bec045a9-ef00-44e7-9df1-f274bf5f9b0c", 00:18:29.311 "is_configured": true, 00:18:29.311 "data_offset": 2048, 00:18:29.311 "data_size": 63488 00:18:29.311 }, 00:18:29.311 { 00:18:29.311 "name": "BaseBdev2", 00:18:29.311 "uuid": "7d56b7a4-0a72-4137-b982-cb52656fae6b", 00:18:29.311 "is_configured": true, 00:18:29.311 "data_offset": 2048, 00:18:29.311 "data_size": 63488 00:18:29.311 }, 00:18:29.311 { 00:18:29.311 "name": "BaseBdev3", 00:18:29.311 "uuid": "c41776b2-0831-41d5-92f5-787e33615f11", 00:18:29.311 "is_configured": true, 00:18:29.311 "data_offset": 2048, 00:18:29.311 "data_size": 63488 00:18:29.311 } 00:18:29.311 ] 00:18:29.311 } 
00:18:29.311 } 00:18:29.311 }' 00:18:29.311 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:29.311 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:29.311 BaseBdev2 00:18:29.311 BaseBdev3' 00:18:29.311 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.311 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:29.311 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.311 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:29.311 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.311 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.311 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.311 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.570 18:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.570 [2024-12-06 18:16:54.955122] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.570 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.830 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.830 "name": "Existed_Raid", 00:18:29.830 "uuid": "d6083fca-2f38-41c3-af08-cd6dba576cc8", 00:18:29.830 "strip_size_kb": 64, 00:18:29.830 "state": "online", 00:18:29.830 "raid_level": "raid5f", 00:18:29.830 "superblock": true, 00:18:29.830 "num_base_bdevs": 3, 00:18:29.830 "num_base_bdevs_discovered": 2, 00:18:29.830 "num_base_bdevs_operational": 2, 00:18:29.830 "base_bdevs_list": [ 00:18:29.830 { 00:18:29.830 "name": null, 00:18:29.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.830 "is_configured": false, 00:18:29.830 "data_offset": 0, 00:18:29.830 "data_size": 63488 00:18:29.830 }, 00:18:29.830 { 00:18:29.830 "name": "BaseBdev2", 00:18:29.830 "uuid": "7d56b7a4-0a72-4137-b982-cb52656fae6b", 00:18:29.830 "is_configured": true, 00:18:29.830 "data_offset": 2048, 00:18:29.830 "data_size": 63488 00:18:29.830 }, 00:18:29.830 { 00:18:29.830 "name": "BaseBdev3", 00:18:29.831 "uuid": "c41776b2-0831-41d5-92f5-787e33615f11", 00:18:29.831 "is_configured": true, 00:18:29.831 "data_offset": 2048, 00:18:29.831 "data_size": 63488 00:18:29.831 } 00:18:29.831 ] 00:18:29.831 }' 00:18:29.831 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.831 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.089 18:16:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:30.089 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:30.089 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.089 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:30.089 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.089 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.089 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.347 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:30.347 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:30.347 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:30.347 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.347 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.347 [2024-12-06 18:16:55.629688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:30.347 [2024-12-06 18:16:55.630036] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.347 [2024-12-06 18:16:55.717912] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.347 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.347 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:30.348 18:16:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:30.348 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.348 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:30.348 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.348 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.348 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.348 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:30.348 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:30.348 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:30.348 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.348 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.348 [2024-12-06 18:16:55.778010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:30.348 [2024-12-06 18:16:55.778191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.606 BaseBdev2 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:30.606 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.607 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.607 [ 00:18:30.607 { 00:18:30.607 "name": "BaseBdev2", 00:18:30.607 "aliases": [ 00:18:30.607 "45e97a37-75f1-4ea6-a595-891782ba7059" 00:18:30.607 ], 00:18:30.607 "product_name": "Malloc disk", 00:18:30.607 "block_size": 512, 00:18:30.607 "num_blocks": 65536, 00:18:30.607 "uuid": "45e97a37-75f1-4ea6-a595-891782ba7059", 00:18:30.607 "assigned_rate_limits": { 00:18:30.607 "rw_ios_per_sec": 0, 00:18:30.607 "rw_mbytes_per_sec": 0, 00:18:30.607 "r_mbytes_per_sec": 0, 00:18:30.607 "w_mbytes_per_sec": 0 00:18:30.607 }, 00:18:30.607 "claimed": false, 00:18:30.607 "zoned": false, 00:18:30.607 "supported_io_types": { 00:18:30.607 "read": true, 00:18:30.607 "write": true, 00:18:30.607 "unmap": true, 00:18:30.607 "flush": true, 00:18:30.607 "reset": true, 00:18:30.607 "nvme_admin": false, 00:18:30.607 "nvme_io": false, 00:18:30.607 "nvme_io_md": false, 00:18:30.607 "write_zeroes": true, 00:18:30.607 "zcopy": true, 00:18:30.607 "get_zone_info": false, 00:18:30.607 "zone_management": false, 00:18:30.607 "zone_append": false, 
00:18:30.607 "compare": false, 00:18:30.607 "compare_and_write": false, 00:18:30.607 "abort": true, 00:18:30.607 "seek_hole": false, 00:18:30.607 "seek_data": false, 00:18:30.607 "copy": true, 00:18:30.607 "nvme_iov_md": false 00:18:30.607 }, 00:18:30.607 "memory_domains": [ 00:18:30.607 { 00:18:30.607 "dma_device_id": "system", 00:18:30.607 "dma_device_type": 1 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.607 "dma_device_type": 2 00:18:30.607 } 00:18:30.607 ], 00:18:30.607 "driver_specific": {} 00:18:30.607 } 00:18:30.607 ] 00:18:30.607 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.607 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:30.607 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:30.607 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:30.607 18:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:30.607 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.607 18:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.607 BaseBdev3 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:30.607 
18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.607 [ 00:18:30.607 { 00:18:30.607 "name": "BaseBdev3", 00:18:30.607 "aliases": [ 00:18:30.607 "022e02a9-2403-45bc-9602-7b1b1c55c801" 00:18:30.607 ], 00:18:30.607 "product_name": "Malloc disk", 00:18:30.607 "block_size": 512, 00:18:30.607 "num_blocks": 65536, 00:18:30.607 "uuid": "022e02a9-2403-45bc-9602-7b1b1c55c801", 00:18:30.607 "assigned_rate_limits": { 00:18:30.607 "rw_ios_per_sec": 0, 00:18:30.607 "rw_mbytes_per_sec": 0, 00:18:30.607 "r_mbytes_per_sec": 0, 00:18:30.607 "w_mbytes_per_sec": 0 00:18:30.607 }, 00:18:30.607 "claimed": false, 00:18:30.607 "zoned": false, 00:18:30.607 "supported_io_types": { 00:18:30.607 "read": true, 00:18:30.607 "write": true, 00:18:30.607 "unmap": true, 00:18:30.607 "flush": true, 00:18:30.607 "reset": true, 00:18:30.607 "nvme_admin": false, 00:18:30.607 "nvme_io": false, 00:18:30.607 "nvme_io_md": false, 00:18:30.607 "write_zeroes": true, 00:18:30.607 "zcopy": true, 00:18:30.607 "get_zone_info": 
false, 00:18:30.607 "zone_management": false, 00:18:30.607 "zone_append": false, 00:18:30.607 "compare": false, 00:18:30.607 "compare_and_write": false, 00:18:30.607 "abort": true, 00:18:30.607 "seek_hole": false, 00:18:30.607 "seek_data": false, 00:18:30.607 "copy": true, 00:18:30.607 "nvme_iov_md": false 00:18:30.607 }, 00:18:30.607 "memory_domains": [ 00:18:30.607 { 00:18:30.607 "dma_device_id": "system", 00:18:30.607 "dma_device_type": 1 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.607 "dma_device_type": 2 00:18:30.607 } 00:18:30.607 ], 00:18:30.607 "driver_specific": {} 00:18:30.607 } 00:18:30.607 ] 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.607 [2024-12-06 18:16:56.061406] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:30.607 [2024-12-06 18:16:56.061465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:30.607 [2024-12-06 18:16:56.061498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:30.607 [2024-12-06 18:16:56.063884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.607 18:16:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.607 "name": "Existed_Raid", 00:18:30.607 "uuid": "2c4b6b1d-39ea-4734-823d-29292741b212", 00:18:30.607 "strip_size_kb": 64, 00:18:30.607 "state": "configuring", 00:18:30.607 "raid_level": "raid5f", 00:18:30.607 "superblock": true, 00:18:30.607 "num_base_bdevs": 3, 00:18:30.607 "num_base_bdevs_discovered": 2, 00:18:30.607 "num_base_bdevs_operational": 3, 00:18:30.607 "base_bdevs_list": [ 00:18:30.607 { 00:18:30.607 "name": "BaseBdev1", 00:18:30.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.607 "is_configured": false, 00:18:30.607 "data_offset": 0, 00:18:30.607 "data_size": 0 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "name": "BaseBdev2", 00:18:30.607 "uuid": "45e97a37-75f1-4ea6-a595-891782ba7059", 00:18:30.607 "is_configured": true, 00:18:30.607 "data_offset": 2048, 00:18:30.607 "data_size": 63488 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "name": "BaseBdev3", 00:18:30.607 "uuid": "022e02a9-2403-45bc-9602-7b1b1c55c801", 00:18:30.607 "is_configured": true, 00:18:30.607 "data_offset": 2048, 00:18:30.607 "data_size": 63488 00:18:30.607 } 00:18:30.607 ] 00:18:30.607 }' 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.607 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.174 [2024-12-06 18:16:56.589616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.174 
18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.174 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.174 "name": "Existed_Raid", 00:18:31.175 "uuid": 
"2c4b6b1d-39ea-4734-823d-29292741b212", 00:18:31.175 "strip_size_kb": 64, 00:18:31.175 "state": "configuring", 00:18:31.175 "raid_level": "raid5f", 00:18:31.175 "superblock": true, 00:18:31.175 "num_base_bdevs": 3, 00:18:31.175 "num_base_bdevs_discovered": 1, 00:18:31.175 "num_base_bdevs_operational": 3, 00:18:31.175 "base_bdevs_list": [ 00:18:31.175 { 00:18:31.175 "name": "BaseBdev1", 00:18:31.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.175 "is_configured": false, 00:18:31.175 "data_offset": 0, 00:18:31.175 "data_size": 0 00:18:31.175 }, 00:18:31.175 { 00:18:31.175 "name": null, 00:18:31.175 "uuid": "45e97a37-75f1-4ea6-a595-891782ba7059", 00:18:31.175 "is_configured": false, 00:18:31.175 "data_offset": 0, 00:18:31.175 "data_size": 63488 00:18:31.175 }, 00:18:31.175 { 00:18:31.175 "name": "BaseBdev3", 00:18:31.175 "uuid": "022e02a9-2403-45bc-9602-7b1b1c55c801", 00:18:31.175 "is_configured": true, 00:18:31.175 "data_offset": 2048, 00:18:31.175 "data_size": 63488 00:18:31.175 } 00:18:31.175 ] 00:18:31.175 }' 00:18:31.175 18:16:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.175 18:16:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:31.743 18:16:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.743 [2024-12-06 18:16:57.224324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.743 BaseBdev1 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.743 [ 00:18:31.743 { 00:18:31.743 "name": "BaseBdev1", 00:18:31.743 "aliases": [ 00:18:31.743 "b523d994-4548-4867-bd17-db5ee1c0fb58" 00:18:31.743 ], 00:18:31.743 "product_name": "Malloc disk", 00:18:31.743 "block_size": 512, 00:18:31.743 "num_blocks": 65536, 00:18:31.743 "uuid": "b523d994-4548-4867-bd17-db5ee1c0fb58", 00:18:31.743 "assigned_rate_limits": { 00:18:31.743 "rw_ios_per_sec": 0, 00:18:31.743 "rw_mbytes_per_sec": 0, 00:18:31.743 "r_mbytes_per_sec": 0, 00:18:31.743 "w_mbytes_per_sec": 0 00:18:31.743 }, 00:18:31.743 "claimed": true, 00:18:31.743 "claim_type": "exclusive_write", 00:18:31.743 "zoned": false, 00:18:31.743 "supported_io_types": { 00:18:31.743 "read": true, 00:18:31.743 "write": true, 00:18:31.743 "unmap": true, 00:18:31.743 "flush": true, 00:18:31.743 "reset": true, 00:18:31.743 "nvme_admin": false, 00:18:31.743 "nvme_io": false, 00:18:31.743 "nvme_io_md": false, 00:18:31.743 "write_zeroes": true, 00:18:31.743 "zcopy": true, 00:18:31.743 "get_zone_info": false, 00:18:31.743 "zone_management": false, 00:18:31.743 "zone_append": false, 00:18:31.743 "compare": false, 00:18:31.743 "compare_and_write": false, 00:18:31.743 "abort": true, 00:18:31.743 "seek_hole": false, 00:18:31.743 "seek_data": false, 00:18:31.743 "copy": true, 00:18:31.743 "nvme_iov_md": false 00:18:31.743 }, 00:18:31.743 "memory_domains": [ 00:18:31.743 { 00:18:31.743 "dma_device_id": "system", 00:18:31.743 "dma_device_type": 1 00:18:31.743 }, 00:18:31.743 { 00:18:31.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.743 "dma_device_type": 2 00:18:31.743 } 00:18:31.743 ], 00:18:31.743 "driver_specific": {} 00:18:31.743 } 00:18:31.743 ] 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.743 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.002 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.002 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.002 "name": "Existed_Raid", 00:18:32.002 "uuid": 
"2c4b6b1d-39ea-4734-823d-29292741b212", 00:18:32.002 "strip_size_kb": 64, 00:18:32.002 "state": "configuring", 00:18:32.002 "raid_level": "raid5f", 00:18:32.002 "superblock": true, 00:18:32.002 "num_base_bdevs": 3, 00:18:32.002 "num_base_bdevs_discovered": 2, 00:18:32.002 "num_base_bdevs_operational": 3, 00:18:32.002 "base_bdevs_list": [ 00:18:32.002 { 00:18:32.002 "name": "BaseBdev1", 00:18:32.002 "uuid": "b523d994-4548-4867-bd17-db5ee1c0fb58", 00:18:32.002 "is_configured": true, 00:18:32.002 "data_offset": 2048, 00:18:32.002 "data_size": 63488 00:18:32.002 }, 00:18:32.002 { 00:18:32.002 "name": null, 00:18:32.002 "uuid": "45e97a37-75f1-4ea6-a595-891782ba7059", 00:18:32.002 "is_configured": false, 00:18:32.002 "data_offset": 0, 00:18:32.002 "data_size": 63488 00:18:32.002 }, 00:18:32.002 { 00:18:32.002 "name": "BaseBdev3", 00:18:32.002 "uuid": "022e02a9-2403-45bc-9602-7b1b1c55c801", 00:18:32.002 "is_configured": true, 00:18:32.002 "data_offset": 2048, 00:18:32.002 "data_size": 63488 00:18:32.002 } 00:18:32.002 ] 00:18:32.002 }' 00:18:32.002 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.002 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:32.571 18:16:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.571 [2024-12-06 18:16:57.848559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.571 "name": "Existed_Raid", 00:18:32.571 "uuid": "2c4b6b1d-39ea-4734-823d-29292741b212", 00:18:32.571 "strip_size_kb": 64, 00:18:32.571 "state": "configuring", 00:18:32.571 "raid_level": "raid5f", 00:18:32.571 "superblock": true, 00:18:32.571 "num_base_bdevs": 3, 00:18:32.571 "num_base_bdevs_discovered": 1, 00:18:32.571 "num_base_bdevs_operational": 3, 00:18:32.571 "base_bdevs_list": [ 00:18:32.571 { 00:18:32.571 "name": "BaseBdev1", 00:18:32.571 "uuid": "b523d994-4548-4867-bd17-db5ee1c0fb58", 00:18:32.571 "is_configured": true, 00:18:32.571 "data_offset": 2048, 00:18:32.571 "data_size": 63488 00:18:32.571 }, 00:18:32.571 { 00:18:32.571 "name": null, 00:18:32.571 "uuid": "45e97a37-75f1-4ea6-a595-891782ba7059", 00:18:32.571 "is_configured": false, 00:18:32.571 "data_offset": 0, 00:18:32.571 "data_size": 63488 00:18:32.571 }, 00:18:32.571 { 00:18:32.571 "name": null, 00:18:32.571 "uuid": "022e02a9-2403-45bc-9602-7b1b1c55c801", 00:18:32.571 "is_configured": false, 00:18:32.571 "data_offset": 0, 00:18:32.571 "data_size": 63488 00:18:32.571 } 00:18:32.571 ] 00:18:32.571 }' 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.571 18:16:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.141 [2024-12-06 18:16:58.420749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.141 "name": "Existed_Raid", 00:18:33.141 "uuid": "2c4b6b1d-39ea-4734-823d-29292741b212", 00:18:33.141 "strip_size_kb": 64, 00:18:33.141 "state": "configuring", 00:18:33.141 "raid_level": "raid5f", 00:18:33.141 "superblock": true, 00:18:33.141 "num_base_bdevs": 3, 00:18:33.141 "num_base_bdevs_discovered": 2, 00:18:33.141 "num_base_bdevs_operational": 3, 00:18:33.141 "base_bdevs_list": [ 00:18:33.141 { 00:18:33.141 "name": "BaseBdev1", 00:18:33.141 "uuid": "b523d994-4548-4867-bd17-db5ee1c0fb58", 00:18:33.141 "is_configured": true, 00:18:33.141 "data_offset": 2048, 00:18:33.141 "data_size": 63488 00:18:33.141 }, 00:18:33.141 { 00:18:33.141 "name": null, 00:18:33.141 "uuid": "45e97a37-75f1-4ea6-a595-891782ba7059", 00:18:33.141 "is_configured": false, 00:18:33.141 "data_offset": 0, 00:18:33.141 "data_size": 63488 00:18:33.141 }, 00:18:33.141 { 00:18:33.141 "name": "BaseBdev3", 00:18:33.141 "uuid": "022e02a9-2403-45bc-9602-7b1b1c55c801", 
00:18:33.141 "is_configured": true, 00:18:33.141 "data_offset": 2048, 00:18:33.141 "data_size": 63488 00:18:33.141 } 00:18:33.141 ] 00:18:33.141 }' 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.141 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.709 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:33.709 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.709 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.709 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.709 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.709 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:33.709 18:16:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:33.709 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.709 18:16:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.710 [2024-12-06 18:16:59.004922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.710 "name": "Existed_Raid", 00:18:33.710 "uuid": "2c4b6b1d-39ea-4734-823d-29292741b212", 00:18:33.710 "strip_size_kb": 64, 00:18:33.710 "state": "configuring", 00:18:33.710 "raid_level": "raid5f", 00:18:33.710 "superblock": true, 00:18:33.710 "num_base_bdevs": 3, 00:18:33.710 "num_base_bdevs_discovered": 1, 00:18:33.710 "num_base_bdevs_operational": 3, 00:18:33.710 "base_bdevs_list": [ 00:18:33.710 { 00:18:33.710 
"name": null, 00:18:33.710 "uuid": "b523d994-4548-4867-bd17-db5ee1c0fb58", 00:18:33.710 "is_configured": false, 00:18:33.710 "data_offset": 0, 00:18:33.710 "data_size": 63488 00:18:33.710 }, 00:18:33.710 { 00:18:33.710 "name": null, 00:18:33.710 "uuid": "45e97a37-75f1-4ea6-a595-891782ba7059", 00:18:33.710 "is_configured": false, 00:18:33.710 "data_offset": 0, 00:18:33.710 "data_size": 63488 00:18:33.710 }, 00:18:33.710 { 00:18:33.710 "name": "BaseBdev3", 00:18:33.710 "uuid": "022e02a9-2403-45bc-9602-7b1b1c55c801", 00:18:33.710 "is_configured": true, 00:18:33.710 "data_offset": 2048, 00:18:33.710 "data_size": 63488 00:18:33.710 } 00:18:33.710 ] 00:18:33.710 }' 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.710 18:16:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.281 [2024-12-06 
18:16:59.692831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.281 "name": "Existed_Raid", 00:18:34.281 "uuid": "2c4b6b1d-39ea-4734-823d-29292741b212", 00:18:34.281 "strip_size_kb": 64, 00:18:34.281 "state": "configuring", 00:18:34.281 "raid_level": "raid5f", 00:18:34.281 "superblock": true, 00:18:34.281 "num_base_bdevs": 3, 00:18:34.281 "num_base_bdevs_discovered": 2, 00:18:34.281 "num_base_bdevs_operational": 3, 00:18:34.281 "base_bdevs_list": [ 00:18:34.281 { 00:18:34.281 "name": null, 00:18:34.281 "uuid": "b523d994-4548-4867-bd17-db5ee1c0fb58", 00:18:34.281 "is_configured": false, 00:18:34.281 "data_offset": 0, 00:18:34.281 "data_size": 63488 00:18:34.281 }, 00:18:34.281 { 00:18:34.281 "name": "BaseBdev2", 00:18:34.281 "uuid": "45e97a37-75f1-4ea6-a595-891782ba7059", 00:18:34.281 "is_configured": true, 00:18:34.281 "data_offset": 2048, 00:18:34.281 "data_size": 63488 00:18:34.281 }, 00:18:34.281 { 00:18:34.281 "name": "BaseBdev3", 00:18:34.281 "uuid": "022e02a9-2403-45bc-9602-7b1b1c55c801", 00:18:34.281 "is_configured": true, 00:18:34.281 "data_offset": 2048, 00:18:34.281 "data_size": 63488 00:18:34.281 } 00:18:34.281 ] 00:18:34.281 }' 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.281 18:16:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.849 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.849 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.849 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.849 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:34.849 18:17:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.849 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:34.849 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:34.849 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.849 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.849 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.849 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.849 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b523d994-4548-4867-bd17-db5ee1c0fb58 00:18:34.849 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.849 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.108 [2024-12-06 18:17:00.370711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:35.108 [2024-12-06 18:17:00.371034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:35.108 [2024-12-06 18:17:00.371059] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:35.108 [2024-12-06 18:17:00.371361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:35.108 NewBaseBdev 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:35.108 18:17:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.108 [2024-12-06 18:17:00.376239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:35.108 [2024-12-06 18:17:00.376269] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:35.108 [2024-12-06 18:17:00.376577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.108 [ 00:18:35.108 { 00:18:35.108 "name": "NewBaseBdev", 00:18:35.108 "aliases": [ 00:18:35.108 "b523d994-4548-4867-bd17-db5ee1c0fb58" 00:18:35.108 ], 00:18:35.108 "product_name": "Malloc 
disk", 00:18:35.108 "block_size": 512, 00:18:35.108 "num_blocks": 65536, 00:18:35.108 "uuid": "b523d994-4548-4867-bd17-db5ee1c0fb58", 00:18:35.108 "assigned_rate_limits": { 00:18:35.108 "rw_ios_per_sec": 0, 00:18:35.108 "rw_mbytes_per_sec": 0, 00:18:35.108 "r_mbytes_per_sec": 0, 00:18:35.108 "w_mbytes_per_sec": 0 00:18:35.108 }, 00:18:35.108 "claimed": true, 00:18:35.108 "claim_type": "exclusive_write", 00:18:35.108 "zoned": false, 00:18:35.108 "supported_io_types": { 00:18:35.108 "read": true, 00:18:35.108 "write": true, 00:18:35.108 "unmap": true, 00:18:35.108 "flush": true, 00:18:35.108 "reset": true, 00:18:35.108 "nvme_admin": false, 00:18:35.108 "nvme_io": false, 00:18:35.108 "nvme_io_md": false, 00:18:35.108 "write_zeroes": true, 00:18:35.108 "zcopy": true, 00:18:35.108 "get_zone_info": false, 00:18:35.108 "zone_management": false, 00:18:35.108 "zone_append": false, 00:18:35.108 "compare": false, 00:18:35.108 "compare_and_write": false, 00:18:35.108 "abort": true, 00:18:35.108 "seek_hole": false, 00:18:35.108 "seek_data": false, 00:18:35.108 "copy": true, 00:18:35.108 "nvme_iov_md": false 00:18:35.108 }, 00:18:35.108 "memory_domains": [ 00:18:35.108 { 00:18:35.108 "dma_device_id": "system", 00:18:35.108 "dma_device_type": 1 00:18:35.108 }, 00:18:35.108 { 00:18:35.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.108 "dma_device_type": 2 00:18:35.108 } 00:18:35.108 ], 00:18:35.108 "driver_specific": {} 00:18:35.108 } 00:18:35.108 ] 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.108 18:17:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.108 "name": "Existed_Raid", 00:18:35.108 "uuid": "2c4b6b1d-39ea-4734-823d-29292741b212", 00:18:35.108 "strip_size_kb": 64, 00:18:35.108 "state": "online", 00:18:35.108 "raid_level": "raid5f", 00:18:35.108 "superblock": true, 00:18:35.108 "num_base_bdevs": 3, 00:18:35.108 "num_base_bdevs_discovered": 3, 00:18:35.108 "num_base_bdevs_operational": 3, 00:18:35.108 
"base_bdevs_list": [ 00:18:35.108 { 00:18:35.108 "name": "NewBaseBdev", 00:18:35.108 "uuid": "b523d994-4548-4867-bd17-db5ee1c0fb58", 00:18:35.108 "is_configured": true, 00:18:35.108 "data_offset": 2048, 00:18:35.108 "data_size": 63488 00:18:35.108 }, 00:18:35.108 { 00:18:35.108 "name": "BaseBdev2", 00:18:35.108 "uuid": "45e97a37-75f1-4ea6-a595-891782ba7059", 00:18:35.108 "is_configured": true, 00:18:35.108 "data_offset": 2048, 00:18:35.108 "data_size": 63488 00:18:35.108 }, 00:18:35.108 { 00:18:35.108 "name": "BaseBdev3", 00:18:35.108 "uuid": "022e02a9-2403-45bc-9602-7b1b1c55c801", 00:18:35.108 "is_configured": true, 00:18:35.108 "data_offset": 2048, 00:18:35.108 "data_size": 63488 00:18:35.108 } 00:18:35.108 ] 00:18:35.108 }' 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.108 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.676 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:35.676 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:35.676 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:35.676 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:35.676 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:35.676 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:35.676 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:35.676 18:17:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:35.676 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:35.676 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.676 [2024-12-06 18:17:00.962540] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.676 18:17:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.676 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:35.676 "name": "Existed_Raid", 00:18:35.676 "aliases": [ 00:18:35.676 "2c4b6b1d-39ea-4734-823d-29292741b212" 00:18:35.676 ], 00:18:35.676 "product_name": "Raid Volume", 00:18:35.676 "block_size": 512, 00:18:35.676 "num_blocks": 126976, 00:18:35.676 "uuid": "2c4b6b1d-39ea-4734-823d-29292741b212", 00:18:35.676 "assigned_rate_limits": { 00:18:35.676 "rw_ios_per_sec": 0, 00:18:35.676 "rw_mbytes_per_sec": 0, 00:18:35.676 "r_mbytes_per_sec": 0, 00:18:35.676 "w_mbytes_per_sec": 0 00:18:35.676 }, 00:18:35.676 "claimed": false, 00:18:35.676 "zoned": false, 00:18:35.676 "supported_io_types": { 00:18:35.676 "read": true, 00:18:35.676 "write": true, 00:18:35.676 "unmap": false, 00:18:35.676 "flush": false, 00:18:35.676 "reset": true, 00:18:35.676 "nvme_admin": false, 00:18:35.676 "nvme_io": false, 00:18:35.676 "nvme_io_md": false, 00:18:35.676 "write_zeroes": true, 00:18:35.676 "zcopy": false, 00:18:35.676 "get_zone_info": false, 00:18:35.676 "zone_management": false, 00:18:35.676 "zone_append": false, 00:18:35.676 "compare": false, 00:18:35.676 "compare_and_write": false, 00:18:35.676 "abort": false, 00:18:35.676 "seek_hole": false, 00:18:35.676 "seek_data": false, 00:18:35.676 "copy": false, 00:18:35.676 "nvme_iov_md": false 00:18:35.676 }, 00:18:35.676 "driver_specific": { 00:18:35.676 "raid": { 00:18:35.676 "uuid": "2c4b6b1d-39ea-4734-823d-29292741b212", 00:18:35.676 "strip_size_kb": 64, 00:18:35.676 "state": "online", 00:18:35.676 "raid_level": "raid5f", 00:18:35.676 "superblock": true, 00:18:35.676 
"num_base_bdevs": 3, 00:18:35.676 "num_base_bdevs_discovered": 3, 00:18:35.676 "num_base_bdevs_operational": 3, 00:18:35.676 "base_bdevs_list": [ 00:18:35.676 { 00:18:35.676 "name": "NewBaseBdev", 00:18:35.676 "uuid": "b523d994-4548-4867-bd17-db5ee1c0fb58", 00:18:35.676 "is_configured": true, 00:18:35.676 "data_offset": 2048, 00:18:35.676 "data_size": 63488 00:18:35.676 }, 00:18:35.676 { 00:18:35.676 "name": "BaseBdev2", 00:18:35.676 "uuid": "45e97a37-75f1-4ea6-a595-891782ba7059", 00:18:35.676 "is_configured": true, 00:18:35.676 "data_offset": 2048, 00:18:35.676 "data_size": 63488 00:18:35.676 }, 00:18:35.676 { 00:18:35.676 "name": "BaseBdev3", 00:18:35.676 "uuid": "022e02a9-2403-45bc-9602-7b1b1c55c801", 00:18:35.676 "is_configured": true, 00:18:35.676 "data_offset": 2048, 00:18:35.676 "data_size": 63488 00:18:35.676 } 00:18:35.676 ] 00:18:35.676 } 00:18:35.676 } 00:18:35.676 }' 00:18:35.676 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:35.676 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:35.676 BaseBdev2 00:18:35.676 BaseBdev3' 00:18:35.676 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.676 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:35.676 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.676 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:35.676 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.676 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.676 
18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.676 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.676 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:35.676 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:35.676 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.677 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:35.677 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.677 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.677 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.677 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.936 [2024-12-06 18:17:01.274341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:35.936 [2024-12-06 18:17:01.274378] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:35.936 [2024-12-06 18:17:01.274503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:35.936 [2024-12-06 18:17:01.274882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:35.936 [2024-12-06 18:17:01.274916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80908 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80908 ']' 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80908 00:18:35.936 18:17:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80908 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:35.936 killing process with pid 80908 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80908' 00:18:35.936 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80908 00:18:35.937 [2024-12-06 18:17:01.317403] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:35.937 18:17:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80908 00:18:36.195 [2024-12-06 18:17:01.594284] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:37.132 18:17:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:37.132 00:18:37.132 real 0m12.002s 00:18:37.132 user 0m19.989s 00:18:37.132 sys 0m1.644s 00:18:37.132 18:17:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.132 18:17:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.132 ************************************ 00:18:37.133 END TEST raid5f_state_function_test_sb 00:18:37.133 ************************************ 00:18:37.391 18:17:02 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:18:37.391 18:17:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:37.391 
18:17:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.391 18:17:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:37.391 ************************************ 00:18:37.391 START TEST raid5f_superblock_test 00:18:37.391 ************************************ 00:18:37.391 18:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:18:37.391 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:37.391 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:18:37.391 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:37.391 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:37.391 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:37.391 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:37.391 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:37.391 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:37.391 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:37.391 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:37.391 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:37.391 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:37.392 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:37.392 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:37.392 18:17:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:37.392 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:37.392 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81541 00:18:37.392 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81541 00:18:37.392 18:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81541 ']' 00:18:37.392 18:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.392 18:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.392 18:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.392 18:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.392 18:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.392 18:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:37.392 [2024-12-06 18:17:02.814876] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:18:37.392 [2024-12-06 18:17:02.815055] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81541 ] 00:18:37.650 [2024-12-06 18:17:03.008678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.650 [2024-12-06 18:17:03.138255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.908 [2024-12-06 18:17:03.339984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.908 [2024-12-06 18:17:03.340036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.474 malloc1 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.474 [2024-12-06 18:17:03.768275] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:38.474 [2024-12-06 18:17:03.768348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.474 [2024-12-06 18:17:03.768383] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:38.474 [2024-12-06 18:17:03.768401] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.474 [2024-12-06 18:17:03.771229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.474 [2024-12-06 18:17:03.771276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:38.474 pt1 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.474 malloc2 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.474 [2024-12-06 18:17:03.824434] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:38.474 [2024-12-06 18:17:03.824508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.474 [2024-12-06 18:17:03.824549] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:38.474 [2024-12-06 18:17:03.824566] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.474 [2024-12-06 18:17:03.827388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.474 [2024-12-06 18:17:03.827435] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:38.474 pt2 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.474 malloc3 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.474 [2024-12-06 18:17:03.886444] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:38.474 [2024-12-06 18:17:03.886530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.474 [2024-12-06 18:17:03.886565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:38.474 [2024-12-06 18:17:03.886583] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.474 [2024-12-06 18:17:03.889318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.474 [2024-12-06 18:17:03.889364] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:38.474 pt3 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.474 [2024-12-06 18:17:03.894539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:38.474 [2024-12-06 18:17:03.896995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:38.474 [2024-12-06 18:17:03.897098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:38.474 [2024-12-06 18:17:03.897329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:38.474 [2024-12-06 18:17:03.897369] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:18:38.474 [2024-12-06 18:17:03.897699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:38.474 [2024-12-06 18:17:03.902891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:38.474 [2024-12-06 18:17:03.902923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:38.474 [2024-12-06 18:17:03.903191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.474 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.475 "name": "raid_bdev1", 00:18:38.475 "uuid": "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42", 00:18:38.475 "strip_size_kb": 64, 00:18:38.475 "state": "online", 00:18:38.475 "raid_level": "raid5f", 00:18:38.475 "superblock": true, 00:18:38.475 "num_base_bdevs": 3, 00:18:38.475 "num_base_bdevs_discovered": 3, 00:18:38.475 "num_base_bdevs_operational": 3, 00:18:38.475 "base_bdevs_list": [ 00:18:38.475 { 00:18:38.475 "name": "pt1", 00:18:38.475 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.475 "is_configured": true, 00:18:38.475 "data_offset": 2048, 00:18:38.475 "data_size": 63488 00:18:38.475 }, 00:18:38.475 { 00:18:38.475 "name": "pt2", 00:18:38.475 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.475 "is_configured": true, 00:18:38.475 "data_offset": 2048, 00:18:38.475 "data_size": 63488 00:18:38.475 }, 00:18:38.475 { 00:18:38.475 "name": "pt3", 00:18:38.475 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:38.475 "is_configured": true, 00:18:38.475 "data_offset": 2048, 00:18:38.475 "data_size": 63488 00:18:38.475 } 00:18:38.475 ] 00:18:38.475 }' 00:18:38.475 18:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.475 18:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.040 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:39.040 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:39.040 18:17:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:39.040 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:39.040 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:39.040 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:39.040 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.040 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.040 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.040 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:39.040 [2024-12-06 18:17:04.405184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.040 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.040 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:39.040 "name": "raid_bdev1", 00:18:39.040 "aliases": [ 00:18:39.040 "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42" 00:18:39.040 ], 00:18:39.040 "product_name": "Raid Volume", 00:18:39.040 "block_size": 512, 00:18:39.040 "num_blocks": 126976, 00:18:39.040 "uuid": "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42", 00:18:39.040 "assigned_rate_limits": { 00:18:39.040 "rw_ios_per_sec": 0, 00:18:39.040 "rw_mbytes_per_sec": 0, 00:18:39.040 "r_mbytes_per_sec": 0, 00:18:39.040 "w_mbytes_per_sec": 0 00:18:39.040 }, 00:18:39.040 "claimed": false, 00:18:39.040 "zoned": false, 00:18:39.040 "supported_io_types": { 00:18:39.040 "read": true, 00:18:39.040 "write": true, 00:18:39.040 "unmap": false, 00:18:39.040 "flush": false, 00:18:39.040 "reset": true, 00:18:39.040 "nvme_admin": false, 00:18:39.040 "nvme_io": false, 00:18:39.040 "nvme_io_md": false, 
00:18:39.040 "write_zeroes": true, 00:18:39.040 "zcopy": false, 00:18:39.040 "get_zone_info": false, 00:18:39.040 "zone_management": false, 00:18:39.040 "zone_append": false, 00:18:39.040 "compare": false, 00:18:39.040 "compare_and_write": false, 00:18:39.040 "abort": false, 00:18:39.040 "seek_hole": false, 00:18:39.040 "seek_data": false, 00:18:39.040 "copy": false, 00:18:39.040 "nvme_iov_md": false 00:18:39.040 }, 00:18:39.040 "driver_specific": { 00:18:39.040 "raid": { 00:18:39.040 "uuid": "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42", 00:18:39.040 "strip_size_kb": 64, 00:18:39.040 "state": "online", 00:18:39.040 "raid_level": "raid5f", 00:18:39.040 "superblock": true, 00:18:39.040 "num_base_bdevs": 3, 00:18:39.040 "num_base_bdevs_discovered": 3, 00:18:39.040 "num_base_bdevs_operational": 3, 00:18:39.040 "base_bdevs_list": [ 00:18:39.040 { 00:18:39.040 "name": "pt1", 00:18:39.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.040 "is_configured": true, 00:18:39.040 "data_offset": 2048, 00:18:39.040 "data_size": 63488 00:18:39.040 }, 00:18:39.040 { 00:18:39.040 "name": "pt2", 00:18:39.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.040 "is_configured": true, 00:18:39.040 "data_offset": 2048, 00:18:39.040 "data_size": 63488 00:18:39.040 }, 00:18:39.040 { 00:18:39.040 "name": "pt3", 00:18:39.040 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:39.040 "is_configured": true, 00:18:39.040 "data_offset": 2048, 00:18:39.040 "data_size": 63488 00:18:39.040 } 00:18:39.040 ] 00:18:39.040 } 00:18:39.040 } 00:18:39.040 }' 00:18:39.040 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:39.040 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:39.040 pt2 00:18:39.040 pt3' 00:18:39.040 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.299 
18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.299 [2024-12-06 18:17:04.721227] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42 ']' 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:39.299 18:17:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.299 [2024-12-06 18:17:04.765009] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.299 [2024-12-06 18:17:04.765052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.299 [2024-12-06 18:17:04.765159] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.299 [2024-12-06 18:17:04.765260] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.299 [2024-12-06 18:17:04.765278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.299 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:39.558 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.559 [2024-12-06 18:17:04.905116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:39.559 [2024-12-06 18:17:04.907585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:39.559 [2024-12-06 18:17:04.907669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:39.559 [2024-12-06 18:17:04.907746] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:39.559 [2024-12-06 18:17:04.907834] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:39.559 [2024-12-06 18:17:04.907872] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:39.559 [2024-12-06 18:17:04.907901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.559 [2024-12-06 18:17:04.907915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:39.559 request: 00:18:39.559 { 00:18:39.559 "name": "raid_bdev1", 00:18:39.559 "raid_level": "raid5f", 00:18:39.559 "base_bdevs": [ 00:18:39.559 "malloc1", 00:18:39.559 "malloc2", 00:18:39.559 "malloc3" 00:18:39.559 ], 00:18:39.559 "strip_size_kb": 64, 00:18:39.559 "superblock": false, 00:18:39.559 "method": "bdev_raid_create", 00:18:39.559 "req_id": 1 00:18:39.559 } 00:18:39.559 Got JSON-RPC error response 00:18:39.559 response: 00:18:39.559 { 00:18:39.559 "code": -17, 00:18:39.559 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:39.559 } 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.559 
18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.559 [2024-12-06 18:17:04.973073] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:39.559 [2024-12-06 18:17:04.973153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.559 [2024-12-06 18:17:04.973187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:39.559 [2024-12-06 18:17:04.973203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.559 [2024-12-06 18:17:04.976082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.559 [2024-12-06 18:17:04.976129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:39.559 [2024-12-06 18:17:04.976240] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:39.559 [2024-12-06 18:17:04.976308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:39.559 pt1 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.559 18:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.559 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.559 "name": "raid_bdev1", 00:18:39.559 "uuid": "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42", 00:18:39.559 "strip_size_kb": 64, 00:18:39.559 "state": "configuring", 00:18:39.559 "raid_level": "raid5f", 00:18:39.559 "superblock": true, 00:18:39.559 "num_base_bdevs": 3, 00:18:39.559 "num_base_bdevs_discovered": 1, 00:18:39.559 
"num_base_bdevs_operational": 3, 00:18:39.559 "base_bdevs_list": [ 00:18:39.559 { 00:18:39.559 "name": "pt1", 00:18:39.559 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.559 "is_configured": true, 00:18:39.559 "data_offset": 2048, 00:18:39.559 "data_size": 63488 00:18:39.559 }, 00:18:39.559 { 00:18:39.559 "name": null, 00:18:39.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.559 "is_configured": false, 00:18:39.559 "data_offset": 2048, 00:18:39.559 "data_size": 63488 00:18:39.559 }, 00:18:39.559 { 00:18:39.559 "name": null, 00:18:39.559 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:39.559 "is_configured": false, 00:18:39.559 "data_offset": 2048, 00:18:39.559 "data_size": 63488 00:18:39.559 } 00:18:39.559 ] 00:18:39.559 }' 00:18:39.559 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.559 18:17:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.125 [2024-12-06 18:17:05.513538] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:40.125 [2024-12-06 18:17:05.513615] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.125 [2024-12-06 18:17:05.513650] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:40.125 [2024-12-06 18:17:05.513667] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.125 [2024-12-06 18:17:05.514245] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.125 [2024-12-06 18:17:05.514295] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:40.125 [2024-12-06 18:17:05.514406] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:40.125 [2024-12-06 18:17:05.514447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.125 pt2 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.125 [2024-12-06 18:17:05.521525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.125 18:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.125 "name": "raid_bdev1", 00:18:40.125 "uuid": "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42", 00:18:40.126 "strip_size_kb": 64, 00:18:40.126 "state": "configuring", 00:18:40.126 "raid_level": "raid5f", 00:18:40.126 "superblock": true, 00:18:40.126 "num_base_bdevs": 3, 00:18:40.126 "num_base_bdevs_discovered": 1, 00:18:40.126 "num_base_bdevs_operational": 3, 00:18:40.126 "base_bdevs_list": [ 00:18:40.126 { 00:18:40.126 "name": "pt1", 00:18:40.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:40.126 "is_configured": true, 00:18:40.126 "data_offset": 2048, 00:18:40.126 "data_size": 63488 00:18:40.126 }, 00:18:40.126 { 00:18:40.126 "name": null, 00:18:40.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.126 "is_configured": false, 00:18:40.126 "data_offset": 0, 00:18:40.126 "data_size": 63488 00:18:40.126 }, 00:18:40.126 { 00:18:40.126 "name": null, 00:18:40.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:40.126 "is_configured": false, 00:18:40.126 "data_offset": 2048, 00:18:40.126 "data_size": 63488 00:18:40.126 } 00:18:40.126 ] 00:18:40.126 }' 00:18:40.126 18:17:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.126 18:17:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.693 [2024-12-06 18:17:06.085633] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:40.693 [2024-12-06 18:17:06.085719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.693 [2024-12-06 18:17:06.085749] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:40.693 [2024-12-06 18:17:06.085787] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.693 [2024-12-06 18:17:06.086386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.693 [2024-12-06 18:17:06.086428] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:40.693 [2024-12-06 18:17:06.086541] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:40.693 [2024-12-06 18:17:06.086579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.693 pt2 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:40.693 18:17:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.693 [2024-12-06 18:17:06.093614] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:40.693 [2024-12-06 18:17:06.093670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.693 [2024-12-06 18:17:06.093693] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:40.693 [2024-12-06 18:17:06.093711] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.693 [2024-12-06 18:17:06.094168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.693 [2024-12-06 18:17:06.094219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:40.693 [2024-12-06 18:17:06.094297] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:40.693 [2024-12-06 18:17:06.094330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:40.693 [2024-12-06 18:17:06.094520] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:40.693 [2024-12-06 18:17:06.094561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:40.693 [2024-12-06 18:17:06.094895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:40.693 [2024-12-06 18:17:06.099839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:40.693 [2024-12-06 18:17:06.099871] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:40.693 [2024-12-06 18:17:06.100094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.693 pt3 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.693 "name": "raid_bdev1", 00:18:40.693 "uuid": "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42", 00:18:40.693 "strip_size_kb": 64, 00:18:40.693 "state": "online", 00:18:40.693 "raid_level": "raid5f", 00:18:40.693 "superblock": true, 00:18:40.693 "num_base_bdevs": 3, 00:18:40.693 "num_base_bdevs_discovered": 3, 00:18:40.693 "num_base_bdevs_operational": 3, 00:18:40.693 "base_bdevs_list": [ 00:18:40.693 { 00:18:40.693 "name": "pt1", 00:18:40.693 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:40.693 "is_configured": true, 00:18:40.693 "data_offset": 2048, 00:18:40.693 "data_size": 63488 00:18:40.693 }, 00:18:40.693 { 00:18:40.693 "name": "pt2", 00:18:40.693 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.693 "is_configured": true, 00:18:40.693 "data_offset": 2048, 00:18:40.693 "data_size": 63488 00:18:40.693 }, 00:18:40.693 { 00:18:40.693 "name": "pt3", 00:18:40.693 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:40.693 "is_configured": true, 00:18:40.693 "data_offset": 2048, 00:18:40.693 "data_size": 63488 00:18:40.693 } 00:18:40.693 ] 00:18:40.693 }' 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.693 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.260 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:41.260 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:41.260 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
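The `verify_raid_bdev_state` calls traced above boil down to a handful of field comparisons against the `raid_bdev_info` JSON. A minimal standalone sketch of that check follows; the field values are hard-coded here to mirror the JSON in the trace rather than pulled live via `rpc_cmd bdev_raid_get_bdevs` and `jq`, and the variable names are simplified from the real helper:

```shell
# Hypothetical condensed sketch of bdev_raid.sh's verify_raid_bdev_state.
# The real helper extracts these fields from `rpc_cmd bdev_raid_get_bdevs all`
# with `jq -r '.[] | select(.name == "raid_bdev1")'`; values below are
# copied from the raid_bdev_info JSON printed in the trace.
expected_state=online
raid_level=raid5f
strip_size=64
num_base_bdevs_operational=3

state=online            # .state from raid_bdev_info
level=raid5f            # .raid_level
strip_size_kb=64        # .strip_size_kb
num_discovered=3        # .num_base_bdevs_discovered

[[ "$state" == "$expected_state" ]] || { echo "bad state: $state" >&2; exit 1; }
[[ "$level" == "$raid_level" ]] || { echo "bad level: $level" >&2; exit 1; }
(( strip_size_kb == strip_size )) || { echo "bad strip size" >&2; exit 1; }
(( num_discovered == num_base_bdevs_operational )) || { echo "bdev count mismatch" >&2; exit 1; }
echo "raid_bdev1 verified: $state/$level strip=${strip_size_kb}K bdevs=$num_discovered"
```

Once all three passthru bdevs are claimed, `num_base_bdevs_discovered` reaches 3 and the same helper is re-run with `expected_state=online`, which is exactly the transition visible between the two JSON dumps above.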
00:18:41.260 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:41.260 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:41.260 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:41.260 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:41.260 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.260 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:41.260 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.260 [2024-12-06 18:17:06.630151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.260 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.260 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:41.260 "name": "raid_bdev1", 00:18:41.260 "aliases": [ 00:18:41.260 "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42" 00:18:41.260 ], 00:18:41.260 "product_name": "Raid Volume", 00:18:41.260 "block_size": 512, 00:18:41.260 "num_blocks": 126976, 00:18:41.260 "uuid": "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42", 00:18:41.260 "assigned_rate_limits": { 00:18:41.260 "rw_ios_per_sec": 0, 00:18:41.260 "rw_mbytes_per_sec": 0, 00:18:41.260 "r_mbytes_per_sec": 0, 00:18:41.260 "w_mbytes_per_sec": 0 00:18:41.260 }, 00:18:41.260 "claimed": false, 00:18:41.260 "zoned": false, 00:18:41.260 "supported_io_types": { 00:18:41.260 "read": true, 00:18:41.260 "write": true, 00:18:41.260 "unmap": false, 00:18:41.260 "flush": false, 00:18:41.260 "reset": true, 00:18:41.260 "nvme_admin": false, 00:18:41.260 "nvme_io": false, 00:18:41.260 "nvme_io_md": false, 00:18:41.260 "write_zeroes": true, 00:18:41.260 "zcopy": false, 00:18:41.260 
"get_zone_info": false, 00:18:41.260 "zone_management": false, 00:18:41.260 "zone_append": false, 00:18:41.260 "compare": false, 00:18:41.260 "compare_and_write": false, 00:18:41.261 "abort": false, 00:18:41.261 "seek_hole": false, 00:18:41.261 "seek_data": false, 00:18:41.261 "copy": false, 00:18:41.261 "nvme_iov_md": false 00:18:41.261 }, 00:18:41.261 "driver_specific": { 00:18:41.261 "raid": { 00:18:41.261 "uuid": "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42", 00:18:41.261 "strip_size_kb": 64, 00:18:41.261 "state": "online", 00:18:41.261 "raid_level": "raid5f", 00:18:41.261 "superblock": true, 00:18:41.261 "num_base_bdevs": 3, 00:18:41.261 "num_base_bdevs_discovered": 3, 00:18:41.261 "num_base_bdevs_operational": 3, 00:18:41.261 "base_bdevs_list": [ 00:18:41.261 { 00:18:41.261 "name": "pt1", 00:18:41.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:41.261 "is_configured": true, 00:18:41.261 "data_offset": 2048, 00:18:41.261 "data_size": 63488 00:18:41.261 }, 00:18:41.261 { 00:18:41.261 "name": "pt2", 00:18:41.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.261 "is_configured": true, 00:18:41.261 "data_offset": 2048, 00:18:41.261 "data_size": 63488 00:18:41.261 }, 00:18:41.261 { 00:18:41.261 "name": "pt3", 00:18:41.261 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:41.261 "is_configured": true, 00:18:41.261 "data_offset": 2048, 00:18:41.261 "data_size": 63488 00:18:41.261 } 00:18:41.261 ] 00:18:41.261 } 00:18:41.261 } 00:18:41.261 }' 00:18:41.261 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:41.261 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:41.261 pt2 00:18:41.261 pt3' 00:18:41.261 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.519 18:17:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:41.519 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.519 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:41.519 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.519 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.519 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.519 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.519 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:41.520 [2024-12-06 18:17:06.950200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42 '!=' 83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42 ']' 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.520 18:17:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.520 [2024-12-06 18:17:07.002061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.520 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.778 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.778 "name": "raid_bdev1", 00:18:41.778 "uuid": "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42", 00:18:41.778 "strip_size_kb": 64, 00:18:41.778 "state": "online", 00:18:41.778 "raid_level": "raid5f", 00:18:41.778 "superblock": true, 00:18:41.778 "num_base_bdevs": 3, 00:18:41.778 "num_base_bdevs_discovered": 2, 00:18:41.778 "num_base_bdevs_operational": 2, 00:18:41.778 "base_bdevs_list": [ 00:18:41.778 { 00:18:41.778 "name": null, 00:18:41.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.778 "is_configured": false, 00:18:41.778 "data_offset": 0, 00:18:41.778 "data_size": 63488 00:18:41.778 }, 00:18:41.778 { 00:18:41.778 "name": "pt2", 00:18:41.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.778 "is_configured": true, 00:18:41.778 "data_offset": 2048, 00:18:41.778 "data_size": 63488 00:18:41.778 }, 00:18:41.778 { 00:18:41.778 "name": "pt3", 00:18:41.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:41.778 "is_configured": true, 00:18:41.778 "data_offset": 2048, 00:18:41.778 "data_size": 63488 00:18:41.778 } 00:18:41.778 ] 00:18:41.778 }' 00:18:41.778 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.778 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.035 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:42.035 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.035 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.035 [2024-12-06 18:17:07.546110] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:42.035 [2024-12-06 18:17:07.546162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:42.035 [2024-12-06 18:17:07.546261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:42.035 [2024-12-06 18:17:07.546343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:42.035 [2024-12-06 18:17:07.546367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:42.035 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.293 [2024-12-06 18:17:07.618078] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:42.293 [2024-12-06 18:17:07.618152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.293 [2024-12-06 18:17:07.618188] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:42.293 [2024-12-06 18:17:07.618207] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:18:42.293 [2024-12-06 18:17:07.621096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.293 [2024-12-06 18:17:07.621148] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:42.293 [2024-12-06 18:17:07.621254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:42.293 [2024-12-06 18:17:07.621320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:42.293 pt2 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.293 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.293 "name": "raid_bdev1", 00:18:42.293 "uuid": "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42", 00:18:42.293 "strip_size_kb": 64, 00:18:42.293 "state": "configuring", 00:18:42.293 "raid_level": "raid5f", 00:18:42.294 "superblock": true, 00:18:42.294 "num_base_bdevs": 3, 00:18:42.294 "num_base_bdevs_discovered": 1, 00:18:42.294 "num_base_bdevs_operational": 2, 00:18:42.294 "base_bdevs_list": [ 00:18:42.294 { 00:18:42.294 "name": null, 00:18:42.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.294 "is_configured": false, 00:18:42.294 "data_offset": 2048, 00:18:42.294 "data_size": 63488 00:18:42.294 }, 00:18:42.294 { 00:18:42.294 "name": "pt2", 00:18:42.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.294 "is_configured": true, 00:18:42.294 "data_offset": 2048, 00:18:42.294 "data_size": 63488 00:18:42.294 }, 00:18:42.294 { 00:18:42.294 "name": null, 00:18:42.294 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:42.294 "is_configured": false, 00:18:42.294 "data_offset": 2048, 00:18:42.294 "data_size": 63488 00:18:42.294 } 00:18:42.294 ] 00:18:42.294 }' 00:18:42.294 18:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.294 18:17:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.860 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:42.860 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:42.860 18:17:08 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:18:42.860 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:42.860 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.860 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.860 [2024-12-06 18:17:08.166230] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:42.860 [2024-12-06 18:17:08.166321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.860 [2024-12-06 18:17:08.166355] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:42.860 [2024-12-06 18:17:08.166375] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.860 [2024-12-06 18:17:08.167007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.860 [2024-12-06 18:17:08.167055] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:42.860 [2024-12-06 18:17:08.167157] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:42.860 [2024-12-06 18:17:08.167198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:42.860 [2024-12-06 18:17:08.167343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:42.860 [2024-12-06 18:17:08.167374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:42.860 [2024-12-06 18:17:08.167704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:42.860 [2024-12-06 18:17:08.172589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:42.860 [2024-12-06 18:17:08.172620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:18:42.860 [2024-12-06 18:17:08.173020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.860 pt3 00:18:42.860 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.860 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:42.860 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.860 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.860 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.860 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.861 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:42.861 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.861 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.861 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.861 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.861 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.861 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.861 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.861 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.861 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.861 18:17:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.861 "name": "raid_bdev1", 00:18:42.861 "uuid": "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42", 00:18:42.861 "strip_size_kb": 64, 00:18:42.861 "state": "online", 00:18:42.861 "raid_level": "raid5f", 00:18:42.861 "superblock": true, 00:18:42.861 "num_base_bdevs": 3, 00:18:42.861 "num_base_bdevs_discovered": 2, 00:18:42.861 "num_base_bdevs_operational": 2, 00:18:42.861 "base_bdevs_list": [ 00:18:42.861 { 00:18:42.861 "name": null, 00:18:42.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.861 "is_configured": false, 00:18:42.861 "data_offset": 2048, 00:18:42.861 "data_size": 63488 00:18:42.861 }, 00:18:42.861 { 00:18:42.861 "name": "pt2", 00:18:42.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.861 "is_configured": true, 00:18:42.861 "data_offset": 2048, 00:18:42.861 "data_size": 63488 00:18:42.861 }, 00:18:42.861 { 00:18:42.861 "name": "pt3", 00:18:42.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:42.861 "is_configured": true, 00:18:42.861 "data_offset": 2048, 00:18:42.861 "data_size": 63488 00:18:42.861 } 00:18:42.861 ] 00:18:42.861 }' 00:18:42.861 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.861 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.426 [2024-12-06 18:17:08.686675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.426 [2024-12-06 18:17:08.686718] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.426 [2024-12-06 18:17:08.686841] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.426 [2024-12-06 18:17:08.686933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.426 [2024-12-06 18:17:08.686958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.426 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.426 [2024-12-06 18:17:08.758709] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:43.427 [2024-12-06 18:17:08.758796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.427 [2024-12-06 18:17:08.758829] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:43.427 [2024-12-06 18:17:08.758845] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.427 [2024-12-06 18:17:08.761733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.427 [2024-12-06 18:17:08.761792] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:43.427 [2024-12-06 18:17:08.761905] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:43.427 [2024-12-06 18:17:08.761969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:43.427 [2024-12-06 18:17:08.762148] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:43.427 [2024-12-06 18:17:08.762178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.427 [2024-12-06 18:17:08.762203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:43.427 [2024-12-06 18:17:08.762269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:43.427 pt1 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:18:43.427 18:17:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.427 "name": "raid_bdev1", 00:18:43.427 "uuid": "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42", 00:18:43.427 "strip_size_kb": 64, 00:18:43.427 "state": "configuring", 00:18:43.427 "raid_level": "raid5f", 00:18:43.427 
"superblock": true, 00:18:43.427 "num_base_bdevs": 3, 00:18:43.427 "num_base_bdevs_discovered": 1, 00:18:43.427 "num_base_bdevs_operational": 2, 00:18:43.427 "base_bdevs_list": [ 00:18:43.427 { 00:18:43.427 "name": null, 00:18:43.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.427 "is_configured": false, 00:18:43.427 "data_offset": 2048, 00:18:43.427 "data_size": 63488 00:18:43.427 }, 00:18:43.427 { 00:18:43.427 "name": "pt2", 00:18:43.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.427 "is_configured": true, 00:18:43.427 "data_offset": 2048, 00:18:43.427 "data_size": 63488 00:18:43.427 }, 00:18:43.427 { 00:18:43.427 "name": null, 00:18:43.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:43.427 "is_configured": false, 00:18:43.427 "data_offset": 2048, 00:18:43.427 "data_size": 63488 00:18:43.427 } 00:18:43.427 ] 00:18:43.427 }' 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.427 18:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.035 [2024-12-06 18:17:09.290875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:44.035 [2024-12-06 18:17:09.290967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.035 [2024-12-06 18:17:09.291001] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:44.035 [2024-12-06 18:17:09.291016] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.035 [2024-12-06 18:17:09.291642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.035 [2024-12-06 18:17:09.291681] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:44.035 [2024-12-06 18:17:09.291823] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:44.035 [2024-12-06 18:17:09.291858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:44.035 [2024-12-06 18:17:09.292014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:44.035 [2024-12-06 18:17:09.292040] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:44.035 [2024-12-06 18:17:09.292354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:44.035 [2024-12-06 18:17:09.297314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:44.035 [2024-12-06 18:17:09.297355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:44.035 [2024-12-06 18:17:09.297676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.035 pt3 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.035 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.036 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.036 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.036 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.036 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.036 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.036 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.036 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.036 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.036 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.036 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.036 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.036 "name": "raid_bdev1", 00:18:44.036 "uuid": "83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42", 00:18:44.036 "strip_size_kb": 64, 00:18:44.036 "state": "online", 00:18:44.036 "raid_level": 
"raid5f", 00:18:44.036 "superblock": true, 00:18:44.036 "num_base_bdevs": 3, 00:18:44.036 "num_base_bdevs_discovered": 2, 00:18:44.036 "num_base_bdevs_operational": 2, 00:18:44.036 "base_bdevs_list": [ 00:18:44.036 { 00:18:44.036 "name": null, 00:18:44.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.036 "is_configured": false, 00:18:44.036 "data_offset": 2048, 00:18:44.036 "data_size": 63488 00:18:44.036 }, 00:18:44.036 { 00:18:44.036 "name": "pt2", 00:18:44.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.036 "is_configured": true, 00:18:44.036 "data_offset": 2048, 00:18:44.036 "data_size": 63488 00:18:44.036 }, 00:18:44.036 { 00:18:44.036 "name": "pt3", 00:18:44.036 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:44.036 "is_configured": true, 00:18:44.036 "data_offset": 2048, 00:18:44.036 "data_size": 63488 00:18:44.036 } 00:18:44.036 ] 00:18:44.036 }' 00:18:44.036 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.036 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.294 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:44.294 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:44.294 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.294 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.552 [2024-12-06 18:17:09.859730] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42 '!=' 83b93e6b-a1cd-4dcc-8b93-de1ef5ab1a42 ']' 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81541 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81541 ']' 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81541 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81541 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81541' 00:18:44.552 killing process with pid 81541 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81541 00:18:44.552 [2024-12-06 18:17:09.940412] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.552 18:17:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81541 
00:18:44.552 [2024-12-06 18:17:09.940553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.552 [2024-12-06 18:17:09.940635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.552 [2024-12-06 18:17:09.940667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:44.810 [2024-12-06 18:17:10.216670] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:46.184 18:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:46.184 00:18:46.184 real 0m8.591s 00:18:46.184 user 0m13.978s 00:18:46.184 sys 0m1.267s 00:18:46.184 18:17:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.184 18:17:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.184 ************************************ 00:18:46.184 END TEST raid5f_superblock_test 00:18:46.184 ************************************ 00:18:46.184 18:17:11 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:46.184 18:17:11 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:18:46.184 18:17:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:46.184 18:17:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.184 18:17:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.184 ************************************ 00:18:46.184 START TEST raid5f_rebuild_test 00:18:46.184 ************************************ 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:46.184 18:17:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81987 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81987 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81987 ']' 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.184 18:17:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:46.185 18:17:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.185 18:17:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.185 [2024-12-06 18:17:11.456787] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:18:46.185 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:46.185 Zero copy mechanism will not be used. 00:18:46.185 [2024-12-06 18:17:11.456981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81987 ] 00:18:46.185 [2024-12-06 18:17:11.646418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.443 [2024-12-06 18:17:11.802883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.702 [2024-12-06 18:17:12.009960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.702 [2024-12-06 18:17:12.010040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.962 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.221 BaseBdev1_malloc 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.221 
18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.221 [2024-12-06 18:17:12.527083] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:47.221 [2024-12-06 18:17:12.527161] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.221 [2024-12-06 18:17:12.527194] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:47.221 [2024-12-06 18:17:12.527214] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.221 [2024-12-06 18:17:12.530030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.221 [2024-12-06 18:17:12.530083] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:47.221 BaseBdev1 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.221 BaseBdev2_malloc 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.221 [2024-12-06 18:17:12.575685] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:47.221 [2024-12-06 18:17:12.575780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.221 [2024-12-06 18:17:12.575817] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:47.221 [2024-12-06 18:17:12.575838] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.221 [2024-12-06 18:17:12.578672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.221 [2024-12-06 18:17:12.578726] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:47.221 BaseBdev2 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.221 BaseBdev3_malloc 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.221 [2024-12-06 18:17:12.631353] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:47.221 [2024-12-06 18:17:12.631430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.221 [2024-12-06 18:17:12.631463] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:47.221 [2024-12-06 18:17:12.631483] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.221 [2024-12-06 18:17:12.634299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.221 [2024-12-06 18:17:12.634349] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:47.221 BaseBdev3 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.221 spare_malloc 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.221 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.222 spare_delay 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.222 [2024-12-06 18:17:12.687697] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:47.222 [2024-12-06 18:17:12.687786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.222 [2024-12-06 18:17:12.687818] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:47.222 [2024-12-06 18:17:12.687837] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.222 [2024-12-06 18:17:12.690673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.222 [2024-12-06 18:17:12.690729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:47.222 spare 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.222 [2024-12-06 18:17:12.695794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.222 [2024-12-06 18:17:12.698173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:47.222 [2024-12-06 18:17:12.698275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:47.222 [2024-12-06 18:17:12.698402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:47.222 [2024-12-06 18:17:12.698437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:47.222 [2024-12-06 
18:17:12.698826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:47.222 [2024-12-06 18:17:12.703967] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:47.222 [2024-12-06 18:17:12.704005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:47.222 [2024-12-06 18:17:12.704269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.222 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.481 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.481 "name": "raid_bdev1", 00:18:47.481 "uuid": "836a1ba5-e7f7-4868-b9ff-e67a7812e562", 00:18:47.481 "strip_size_kb": 64, 00:18:47.481 "state": "online", 00:18:47.481 "raid_level": "raid5f", 00:18:47.481 "superblock": false, 00:18:47.481 "num_base_bdevs": 3, 00:18:47.481 "num_base_bdevs_discovered": 3, 00:18:47.481 "num_base_bdevs_operational": 3, 00:18:47.481 "base_bdevs_list": [ 00:18:47.481 { 00:18:47.481 "name": "BaseBdev1", 00:18:47.481 "uuid": "de4dfe4e-0c6e-551b-8cf3-cf2ccdfc8f86", 00:18:47.481 "is_configured": true, 00:18:47.481 "data_offset": 0, 00:18:47.481 "data_size": 65536 00:18:47.481 }, 00:18:47.481 { 00:18:47.481 "name": "BaseBdev2", 00:18:47.481 "uuid": "2d8f2934-b48f-5a15-8459-4fb7f3db0cb1", 00:18:47.481 "is_configured": true, 00:18:47.481 "data_offset": 0, 00:18:47.481 "data_size": 65536 00:18:47.481 }, 00:18:47.481 { 00:18:47.481 "name": "BaseBdev3", 00:18:47.481 "uuid": "90de8492-e857-53b0-81e5-9ab433ef8f4a", 00:18:47.481 "is_configured": true, 00:18:47.481 "data_offset": 0, 00:18:47.481 "data_size": 65536 00:18:47.481 } 00:18:47.481 ] 00:18:47.481 }' 00:18:47.481 18:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.481 18:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.740 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:47.740 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:47.740 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:47.740 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.740 [2024-12-06 18:17:13.194318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.740 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.740 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:18:47.740 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.740 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.740 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.740 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:48.047 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.047 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:48.047 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:48.047 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:48.047 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:48.047 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:48.047 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.047 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:48.048 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:48.048 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:48.048 18:17:13 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:18:48.048 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:48.048 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:48.048 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:48.048 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:48.306 [2024-12-06 18:17:13.622248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:48.306 /dev/nbd0 00:18:48.306 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:48.306 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:48.306 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:48.306 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:48.306 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:48.306 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:48.306 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:48.306 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:48.306 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:48.306 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:48.307 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.307 1+0 records in 00:18:48.307 1+0 records out 00:18:48.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397096 s, 
10.3 MB/s 00:18:48.307 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.307 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:48.307 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.307 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:48.307 18:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:48.307 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.307 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:48.307 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:48.307 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:48.307 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:48.307 18:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:18:48.874 512+0 records in 00:18:48.874 512+0 records out 00:18:48.874 67108864 bytes (67 MB, 64 MiB) copied, 0.53203 s, 126 MB/s 00:18:48.874 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:48.875 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.875 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:48.875 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:48.875 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:48.875 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:18:48.875 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:49.134 [2024-12-06 18:17:14.484379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.134 [2024-12-06 18:17:14.502255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.134 "name": "raid_bdev1", 00:18:49.134 "uuid": "836a1ba5-e7f7-4868-b9ff-e67a7812e562", 00:18:49.134 "strip_size_kb": 64, 00:18:49.134 "state": "online", 00:18:49.134 "raid_level": "raid5f", 00:18:49.134 "superblock": false, 00:18:49.134 "num_base_bdevs": 3, 00:18:49.134 "num_base_bdevs_discovered": 2, 00:18:49.134 "num_base_bdevs_operational": 2, 00:18:49.134 "base_bdevs_list": [ 00:18:49.134 { 00:18:49.134 "name": null, 00:18:49.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.134 "is_configured": false, 00:18:49.134 "data_offset": 0, 00:18:49.134 "data_size": 65536 00:18:49.134 }, 
00:18:49.134 { 00:18:49.134 "name": "BaseBdev2", 00:18:49.134 "uuid": "2d8f2934-b48f-5a15-8459-4fb7f3db0cb1", 00:18:49.134 "is_configured": true, 00:18:49.134 "data_offset": 0, 00:18:49.134 "data_size": 65536 00:18:49.134 }, 00:18:49.134 { 00:18:49.134 "name": "BaseBdev3", 00:18:49.134 "uuid": "90de8492-e857-53b0-81e5-9ab433ef8f4a", 00:18:49.134 "is_configured": true, 00:18:49.134 "data_offset": 0, 00:18:49.134 "data_size": 65536 00:18:49.134 } 00:18:49.134 ] 00:18:49.134 }' 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.134 18:17:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.701 18:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:49.701 18:17:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.701 18:17:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.701 [2024-12-06 18:17:14.994392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.701 [2024-12-06 18:17:15.009998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:18:49.701 18:17:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.701 18:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:49.701 [2024-12-06 18:17:15.017486] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.637 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.637 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.637 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.637 18:17:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.637 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.637 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.637 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.637 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.637 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.637 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.637 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.637 "name": "raid_bdev1", 00:18:50.637 "uuid": "836a1ba5-e7f7-4868-b9ff-e67a7812e562", 00:18:50.637 "strip_size_kb": 64, 00:18:50.637 "state": "online", 00:18:50.637 "raid_level": "raid5f", 00:18:50.637 "superblock": false, 00:18:50.637 "num_base_bdevs": 3, 00:18:50.637 "num_base_bdevs_discovered": 3, 00:18:50.637 "num_base_bdevs_operational": 3, 00:18:50.637 "process": { 00:18:50.637 "type": "rebuild", 00:18:50.637 "target": "spare", 00:18:50.637 "progress": { 00:18:50.637 "blocks": 18432, 00:18:50.637 "percent": 14 00:18:50.637 } 00:18:50.637 }, 00:18:50.637 "base_bdevs_list": [ 00:18:50.637 { 00:18:50.637 "name": "spare", 00:18:50.637 "uuid": "24956012-bf60-5f6a-b9e5-4330da56c91b", 00:18:50.637 "is_configured": true, 00:18:50.637 "data_offset": 0, 00:18:50.637 "data_size": 65536 00:18:50.637 }, 00:18:50.637 { 00:18:50.637 "name": "BaseBdev2", 00:18:50.637 "uuid": "2d8f2934-b48f-5a15-8459-4fb7f3db0cb1", 00:18:50.637 "is_configured": true, 00:18:50.637 "data_offset": 0, 00:18:50.637 "data_size": 65536 00:18:50.637 }, 00:18:50.637 { 00:18:50.637 "name": "BaseBdev3", 00:18:50.637 "uuid": "90de8492-e857-53b0-81e5-9ab433ef8f4a", 00:18:50.637 "is_configured": true, 00:18:50.637 
"data_offset": 0, 00:18:50.637 "data_size": 65536 00:18:50.637 } 00:18:50.637 ] 00:18:50.637 }' 00:18:50.637 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.637 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.637 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.896 [2024-12-06 18:17:16.203692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.896 [2024-12-06 18:17:16.233434] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:50.896 [2024-12-06 18:17:16.233544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.896 [2024-12-06 18:17:16.233575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.896 [2024-12-06 18:17:16.233588] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.896 18:17:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.896 "name": "raid_bdev1", 00:18:50.896 "uuid": "836a1ba5-e7f7-4868-b9ff-e67a7812e562", 00:18:50.896 "strip_size_kb": 64, 00:18:50.896 "state": "online", 00:18:50.896 "raid_level": "raid5f", 00:18:50.896 "superblock": false, 00:18:50.896 "num_base_bdevs": 3, 00:18:50.896 "num_base_bdevs_discovered": 2, 00:18:50.896 "num_base_bdevs_operational": 2, 00:18:50.896 "base_bdevs_list": [ 00:18:50.896 { 00:18:50.896 "name": null, 00:18:50.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.896 "is_configured": false, 00:18:50.896 "data_offset": 0, 00:18:50.896 "data_size": 65536 00:18:50.896 }, 00:18:50.896 { 00:18:50.896 
"name": "BaseBdev2", 00:18:50.896 "uuid": "2d8f2934-b48f-5a15-8459-4fb7f3db0cb1", 00:18:50.896 "is_configured": true, 00:18:50.896 "data_offset": 0, 00:18:50.896 "data_size": 65536 00:18:50.896 }, 00:18:50.896 { 00:18:50.896 "name": "BaseBdev3", 00:18:50.896 "uuid": "90de8492-e857-53b0-81e5-9ab433ef8f4a", 00:18:50.896 "is_configured": true, 00:18:50.896 "data_offset": 0, 00:18:50.896 "data_size": 65536 00:18:50.896 } 00:18:50.896 ] 00:18:50.896 }' 00:18:50.896 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.897 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.463 "name": "raid_bdev1", 00:18:51.463 "uuid": "836a1ba5-e7f7-4868-b9ff-e67a7812e562", 00:18:51.463 "strip_size_kb": 64, 00:18:51.463 "state": 
"online", 00:18:51.463 "raid_level": "raid5f", 00:18:51.463 "superblock": false, 00:18:51.463 "num_base_bdevs": 3, 00:18:51.463 "num_base_bdevs_discovered": 2, 00:18:51.463 "num_base_bdevs_operational": 2, 00:18:51.463 "base_bdevs_list": [ 00:18:51.463 { 00:18:51.463 "name": null, 00:18:51.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.463 "is_configured": false, 00:18:51.463 "data_offset": 0, 00:18:51.463 "data_size": 65536 00:18:51.463 }, 00:18:51.463 { 00:18:51.463 "name": "BaseBdev2", 00:18:51.463 "uuid": "2d8f2934-b48f-5a15-8459-4fb7f3db0cb1", 00:18:51.463 "is_configured": true, 00:18:51.463 "data_offset": 0, 00:18:51.463 "data_size": 65536 00:18:51.463 }, 00:18:51.463 { 00:18:51.463 "name": "BaseBdev3", 00:18:51.463 "uuid": "90de8492-e857-53b0-81e5-9ab433ef8f4a", 00:18:51.463 "is_configured": true, 00:18:51.463 "data_offset": 0, 00:18:51.463 "data_size": 65536 00:18:51.463 } 00:18:51.463 ] 00:18:51.463 }' 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.463 18:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.463 [2024-12-06 18:17:16.976966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.721 [2024-12-06 18:17:16.992634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:51.721 18:17:16 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.721 18:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:51.721 [2024-12-06 18:17:17.000378] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:52.802 18:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.802 18:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.802 18:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.802 18:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.802 18:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.802 18:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.802 18:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.802 "name": "raid_bdev1", 00:18:52.802 "uuid": "836a1ba5-e7f7-4868-b9ff-e67a7812e562", 00:18:52.802 "strip_size_kb": 64, 00:18:52.802 "state": "online", 00:18:52.802 "raid_level": "raid5f", 00:18:52.802 "superblock": false, 00:18:52.802 "num_base_bdevs": 3, 00:18:52.802 "num_base_bdevs_discovered": 3, 00:18:52.802 "num_base_bdevs_operational": 3, 00:18:52.802 "process": { 00:18:52.802 "type": "rebuild", 00:18:52.802 "target": "spare", 00:18:52.802 "progress": { 
00:18:52.802 "blocks": 18432, 00:18:52.802 "percent": 14 00:18:52.802 } 00:18:52.802 }, 00:18:52.802 "base_bdevs_list": [ 00:18:52.802 { 00:18:52.802 "name": "spare", 00:18:52.802 "uuid": "24956012-bf60-5f6a-b9e5-4330da56c91b", 00:18:52.802 "is_configured": true, 00:18:52.802 "data_offset": 0, 00:18:52.802 "data_size": 65536 00:18:52.802 }, 00:18:52.802 { 00:18:52.802 "name": "BaseBdev2", 00:18:52.802 "uuid": "2d8f2934-b48f-5a15-8459-4fb7f3db0cb1", 00:18:52.802 "is_configured": true, 00:18:52.802 "data_offset": 0, 00:18:52.802 "data_size": 65536 00:18:52.802 }, 00:18:52.802 { 00:18:52.802 "name": "BaseBdev3", 00:18:52.802 "uuid": "90de8492-e857-53b0-81e5-9ab433ef8f4a", 00:18:52.802 "is_configured": true, 00:18:52.802 "data_offset": 0, 00:18:52.802 "data_size": 65536 00:18:52.802 } 00:18:52.802 ] 00:18:52.802 }' 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=594 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.802 "name": "raid_bdev1", 00:18:52.802 "uuid": "836a1ba5-e7f7-4868-b9ff-e67a7812e562", 00:18:52.802 "strip_size_kb": 64, 00:18:52.802 "state": "online", 00:18:52.802 "raid_level": "raid5f", 00:18:52.802 "superblock": false, 00:18:52.802 "num_base_bdevs": 3, 00:18:52.802 "num_base_bdevs_discovered": 3, 00:18:52.802 "num_base_bdevs_operational": 3, 00:18:52.802 "process": { 00:18:52.802 "type": "rebuild", 00:18:52.802 "target": "spare", 00:18:52.802 "progress": { 00:18:52.802 "blocks": 22528, 00:18:52.802 "percent": 17 00:18:52.802 } 00:18:52.802 }, 00:18:52.802 "base_bdevs_list": [ 00:18:52.802 { 00:18:52.802 "name": "spare", 00:18:52.802 "uuid": "24956012-bf60-5f6a-b9e5-4330da56c91b", 00:18:52.802 "is_configured": true, 00:18:52.802 "data_offset": 0, 00:18:52.802 "data_size": 65536 00:18:52.802 }, 00:18:52.802 { 00:18:52.802 "name": "BaseBdev2", 00:18:52.802 "uuid": "2d8f2934-b48f-5a15-8459-4fb7f3db0cb1", 00:18:52.802 "is_configured": true, 00:18:52.802 
"data_offset": 0, 00:18:52.802 "data_size": 65536 00:18:52.802 }, 00:18:52.802 { 00:18:52.802 "name": "BaseBdev3", 00:18:52.802 "uuid": "90de8492-e857-53b0-81e5-9ab433ef8f4a", 00:18:52.802 "is_configured": true, 00:18:52.802 "data_offset": 0, 00:18:52.802 "data_size": 65536 00:18:52.802 } 00:18:52.802 ] 00:18:52.802 }' 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.802 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.061 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.061 18:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:54.000 18:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:54.000 18:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.001 18:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.001 18:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.001 18:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.001 18:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.001 18:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.001 18:17:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.001 18:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.001 18:17:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.001 18:17:19 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.001 18:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.001 "name": "raid_bdev1", 00:18:54.001 "uuid": "836a1ba5-e7f7-4868-b9ff-e67a7812e562", 00:18:54.001 "strip_size_kb": 64, 00:18:54.001 "state": "online", 00:18:54.001 "raid_level": "raid5f", 00:18:54.001 "superblock": false, 00:18:54.001 "num_base_bdevs": 3, 00:18:54.001 "num_base_bdevs_discovered": 3, 00:18:54.001 "num_base_bdevs_operational": 3, 00:18:54.001 "process": { 00:18:54.001 "type": "rebuild", 00:18:54.001 "target": "spare", 00:18:54.001 "progress": { 00:18:54.001 "blocks": 47104, 00:18:54.001 "percent": 35 00:18:54.001 } 00:18:54.001 }, 00:18:54.001 "base_bdevs_list": [ 00:18:54.001 { 00:18:54.001 "name": "spare", 00:18:54.001 "uuid": "24956012-bf60-5f6a-b9e5-4330da56c91b", 00:18:54.001 "is_configured": true, 00:18:54.001 "data_offset": 0, 00:18:54.001 "data_size": 65536 00:18:54.001 }, 00:18:54.001 { 00:18:54.001 "name": "BaseBdev2", 00:18:54.001 "uuid": "2d8f2934-b48f-5a15-8459-4fb7f3db0cb1", 00:18:54.001 "is_configured": true, 00:18:54.001 "data_offset": 0, 00:18:54.001 "data_size": 65536 00:18:54.001 }, 00:18:54.001 { 00:18:54.001 "name": "BaseBdev3", 00:18:54.001 "uuid": "90de8492-e857-53b0-81e5-9ab433ef8f4a", 00:18:54.001 "is_configured": true, 00:18:54.001 "data_offset": 0, 00:18:54.001 "data_size": 65536 00:18:54.001 } 00:18:54.001 ] 00:18:54.001 }' 00:18:54.001 18:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.001 18:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.001 18:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.001 18:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.001 18:17:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.378 "name": "raid_bdev1", 00:18:55.378 "uuid": "836a1ba5-e7f7-4868-b9ff-e67a7812e562", 00:18:55.378 "strip_size_kb": 64, 00:18:55.378 "state": "online", 00:18:55.378 "raid_level": "raid5f", 00:18:55.378 "superblock": false, 00:18:55.378 "num_base_bdevs": 3, 00:18:55.378 "num_base_bdevs_discovered": 3, 00:18:55.378 "num_base_bdevs_operational": 3, 00:18:55.378 "process": { 00:18:55.378 "type": "rebuild", 00:18:55.378 "target": "spare", 00:18:55.378 "progress": { 00:18:55.378 "blocks": 69632, 00:18:55.378 "percent": 53 00:18:55.378 } 00:18:55.378 }, 00:18:55.378 "base_bdevs_list": [ 00:18:55.378 { 00:18:55.378 "name": "spare", 00:18:55.378 
"uuid": "24956012-bf60-5f6a-b9e5-4330da56c91b", 00:18:55.378 "is_configured": true, 00:18:55.378 "data_offset": 0, 00:18:55.378 "data_size": 65536 00:18:55.378 }, 00:18:55.378 { 00:18:55.378 "name": "BaseBdev2", 00:18:55.378 "uuid": "2d8f2934-b48f-5a15-8459-4fb7f3db0cb1", 00:18:55.378 "is_configured": true, 00:18:55.378 "data_offset": 0, 00:18:55.378 "data_size": 65536 00:18:55.378 }, 00:18:55.378 { 00:18:55.378 "name": "BaseBdev3", 00:18:55.378 "uuid": "90de8492-e857-53b0-81e5-9ab433ef8f4a", 00:18:55.378 "is_configured": true, 00:18:55.378 "data_offset": 0, 00:18:55.378 "data_size": 65536 00:18:55.378 } 00:18:55.378 ] 00:18:55.378 }' 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.378 18:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.315 18:17:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.315 "name": "raid_bdev1", 00:18:56.315 "uuid": "836a1ba5-e7f7-4868-b9ff-e67a7812e562", 00:18:56.315 "strip_size_kb": 64, 00:18:56.315 "state": "online", 00:18:56.315 "raid_level": "raid5f", 00:18:56.315 "superblock": false, 00:18:56.315 "num_base_bdevs": 3, 00:18:56.315 "num_base_bdevs_discovered": 3, 00:18:56.315 "num_base_bdevs_operational": 3, 00:18:56.315 "process": { 00:18:56.315 "type": "rebuild", 00:18:56.315 "target": "spare", 00:18:56.315 "progress": { 00:18:56.315 "blocks": 94208, 00:18:56.315 "percent": 71 00:18:56.315 } 00:18:56.315 }, 00:18:56.315 "base_bdevs_list": [ 00:18:56.315 { 00:18:56.315 "name": "spare", 00:18:56.315 "uuid": "24956012-bf60-5f6a-b9e5-4330da56c91b", 00:18:56.315 "is_configured": true, 00:18:56.315 "data_offset": 0, 00:18:56.315 "data_size": 65536 00:18:56.315 }, 00:18:56.315 { 00:18:56.315 "name": "BaseBdev2", 00:18:56.315 "uuid": "2d8f2934-b48f-5a15-8459-4fb7f3db0cb1", 00:18:56.315 "is_configured": true, 00:18:56.315 "data_offset": 0, 00:18:56.315 "data_size": 65536 00:18:56.315 }, 00:18:56.315 { 00:18:56.315 "name": "BaseBdev3", 00:18:56.315 "uuid": "90de8492-e857-53b0-81e5-9ab433ef8f4a", 00:18:56.315 "is_configured": true, 00:18:56.315 "data_offset": 0, 00:18:56.315 "data_size": 65536 00:18:56.315 } 00:18:56.315 ] 00:18:56.315 }' 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:56.315 18:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:57.320 18:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:57.320 18:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.320 18:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.320 18:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.320 18:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.320 18:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.579 18:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.579 18:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.579 18:17:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.579 18:17:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.579 18:17:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.579 18:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.579 "name": "raid_bdev1", 00:18:57.579 "uuid": "836a1ba5-e7f7-4868-b9ff-e67a7812e562", 00:18:57.579 "strip_size_kb": 64, 00:18:57.579 "state": "online", 00:18:57.579 "raid_level": "raid5f", 00:18:57.579 "superblock": false, 00:18:57.579 "num_base_bdevs": 3, 00:18:57.579 "num_base_bdevs_discovered": 3, 00:18:57.579 
"num_base_bdevs_operational": 3, 00:18:57.579 "process": { 00:18:57.579 "type": "rebuild", 00:18:57.579 "target": "spare", 00:18:57.579 "progress": { 00:18:57.579 "blocks": 116736, 00:18:57.579 "percent": 89 00:18:57.579 } 00:18:57.579 }, 00:18:57.579 "base_bdevs_list": [ 00:18:57.579 { 00:18:57.579 "name": "spare", 00:18:57.579 "uuid": "24956012-bf60-5f6a-b9e5-4330da56c91b", 00:18:57.579 "is_configured": true, 00:18:57.579 "data_offset": 0, 00:18:57.579 "data_size": 65536 00:18:57.579 }, 00:18:57.579 { 00:18:57.579 "name": "BaseBdev2", 00:18:57.579 "uuid": "2d8f2934-b48f-5a15-8459-4fb7f3db0cb1", 00:18:57.579 "is_configured": true, 00:18:57.579 "data_offset": 0, 00:18:57.579 "data_size": 65536 00:18:57.579 }, 00:18:57.579 { 00:18:57.579 "name": "BaseBdev3", 00:18:57.579 "uuid": "90de8492-e857-53b0-81e5-9ab433ef8f4a", 00:18:57.579 "is_configured": true, 00:18:57.579 "data_offset": 0, 00:18:57.579 "data_size": 65536 00:18:57.579 } 00:18:57.579 ] 00:18:57.579 }' 00:18:57.579 18:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.579 18:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.579 18:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.579 18:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.579 18:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:58.147 [2024-12-06 18:17:23.483822] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:58.147 [2024-12-06 18:17:23.483928] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:58.147 [2024-12-06 18:17:23.484007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.719 18:17:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:18:58.719 18:17:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.719 18:17:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.719 18:17:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:58.719 18:17:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:58.719 18:17:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.719 18:17:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.719 18:17:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.719 18:17:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.719 18:17:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.719 "name": "raid_bdev1", 00:18:58.719 "uuid": "836a1ba5-e7f7-4868-b9ff-e67a7812e562", 00:18:58.719 "strip_size_kb": 64, 00:18:58.719 "state": "online", 00:18:58.719 "raid_level": "raid5f", 00:18:58.719 "superblock": false, 00:18:58.719 "num_base_bdevs": 3, 00:18:58.719 "num_base_bdevs_discovered": 3, 00:18:58.719 "num_base_bdevs_operational": 3, 00:18:58.719 "base_bdevs_list": [ 00:18:58.719 { 00:18:58.719 "name": "spare", 00:18:58.719 "uuid": "24956012-bf60-5f6a-b9e5-4330da56c91b", 00:18:58.719 "is_configured": true, 00:18:58.719 "data_offset": 0, 00:18:58.719 "data_size": 65536 00:18:58.719 }, 00:18:58.719 { 00:18:58.719 "name": "BaseBdev2", 00:18:58.719 "uuid": "2d8f2934-b48f-5a15-8459-4fb7f3db0cb1", 00:18:58.719 "is_configured": true, 00:18:58.719 
"data_offset": 0, 00:18:58.719 "data_size": 65536 00:18:58.719 }, 00:18:58.719 { 00:18:58.719 "name": "BaseBdev3", 00:18:58.719 "uuid": "90de8492-e857-53b0-81e5-9ab433ef8f4a", 00:18:58.719 "is_configured": true, 00:18:58.719 "data_offset": 0, 00:18:58.719 "data_size": 65536 00:18:58.719 } 00:18:58.719 ] 00:18:58.719 }' 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.719 18:17:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.719 18:17:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.719 "name": "raid_bdev1", 00:18:58.719 "uuid": "836a1ba5-e7f7-4868-b9ff-e67a7812e562", 00:18:58.719 "strip_size_kb": 64, 00:18:58.719 "state": "online", 00:18:58.719 "raid_level": "raid5f", 00:18:58.719 "superblock": false, 00:18:58.719 "num_base_bdevs": 3, 00:18:58.719 "num_base_bdevs_discovered": 3, 00:18:58.719 "num_base_bdevs_operational": 3, 00:18:58.719 "base_bdevs_list": [ 00:18:58.719 { 00:18:58.719 "name": "spare", 00:18:58.719 "uuid": "24956012-bf60-5f6a-b9e5-4330da56c91b", 00:18:58.719 "is_configured": true, 00:18:58.719 "data_offset": 0, 00:18:58.719 "data_size": 65536 00:18:58.719 }, 00:18:58.719 { 00:18:58.719 "name": "BaseBdev2", 00:18:58.720 "uuid": "2d8f2934-b48f-5a15-8459-4fb7f3db0cb1", 00:18:58.720 "is_configured": true, 00:18:58.720 "data_offset": 0, 00:18:58.720 "data_size": 65536 00:18:58.720 }, 00:18:58.720 { 00:18:58.720 "name": "BaseBdev3", 00:18:58.720 "uuid": "90de8492-e857-53b0-81e5-9ab433ef8f4a", 00:18:58.720 "is_configured": true, 00:18:58.720 "data_offset": 0, 00:18:58.720 "data_size": 65536 00:18:58.720 } 00:18:58.720 ] 00:18:58.720 }' 00:18:58.720 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.979 18:17:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.979 "name": "raid_bdev1", 00:18:58.979 "uuid": "836a1ba5-e7f7-4868-b9ff-e67a7812e562", 00:18:58.979 "strip_size_kb": 64, 00:18:58.979 "state": "online", 00:18:58.979 "raid_level": "raid5f", 00:18:58.979 "superblock": false, 00:18:58.979 "num_base_bdevs": 3, 00:18:58.979 "num_base_bdevs_discovered": 3, 00:18:58.979 "num_base_bdevs_operational": 3, 00:18:58.979 "base_bdevs_list": [ 00:18:58.979 { 00:18:58.979 "name": "spare", 00:18:58.979 "uuid": "24956012-bf60-5f6a-b9e5-4330da56c91b", 00:18:58.979 "is_configured": true, 00:18:58.979 "data_offset": 0, 00:18:58.979 "data_size": 65536 00:18:58.979 }, 00:18:58.979 { 00:18:58.979 
"name": "BaseBdev2", 00:18:58.979 "uuid": "2d8f2934-b48f-5a15-8459-4fb7f3db0cb1", 00:18:58.979 "is_configured": true, 00:18:58.979 "data_offset": 0, 00:18:58.979 "data_size": 65536 00:18:58.979 }, 00:18:58.979 { 00:18:58.979 "name": "BaseBdev3", 00:18:58.979 "uuid": "90de8492-e857-53b0-81e5-9ab433ef8f4a", 00:18:58.979 "is_configured": true, 00:18:58.979 "data_offset": 0, 00:18:58.979 "data_size": 65536 00:18:58.979 } 00:18:58.979 ] 00:18:58.979 }' 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.979 18:17:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.546 [2024-12-06 18:17:24.875727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:59.546 [2024-12-06 18:17:24.875793] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:59.546 [2024-12-06 18:17:24.875920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.546 [2024-12-06 18:17:24.876032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.546 [2024-12-06 18:17:24.876056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.546 18:17:24 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:59.546 18:17:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:59.804 /dev/nbd0 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:59.804 18:17:25 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:59.804 1+0 records in 00:18:59.804 1+0 records out 00:18:59.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294804 s, 13.9 MB/s 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:59.804 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:00.370 /dev/nbd1 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:00.370 1+0 records in 00:19:00.370 1+0 records out 00:19:00.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423768 s, 9.7 MB/s 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:00.370 18:17:25 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:00.370 18:17:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:00.630 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:00.630 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:00.630 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:00.630 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:00.630 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:00.630 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:00.630 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:00.630 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:19:00.630 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:00.630 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:00.888 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81987 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81987 ']' 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81987 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81987 00:19:01.147 killing process with pid 81987 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.147 
18:17:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81987' 00:19:01.147 18:17:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81987 00:19:01.147 Received shutdown signal, test time was about 60.000000 seconds 00:19:01.147 00:19:01.147 Latency(us) 00:19:01.147 [2024-12-06T18:17:26.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.147 [2024-12-06T18:17:26.668Z] =================================================================================================================== 00:19:01.148 [2024-12-06T18:17:26.668Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.148 18:17:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81987 00:19:01.148 [2024-12-06 18:17:26.443321] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:01.406 [2024-12-06 18:17:26.804030] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:02.785 ************************************ 00:19:02.785 END TEST raid5f_rebuild_test 00:19:02.785 ************************************ 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:02.785 00:19:02.785 real 0m16.538s 00:19:02.785 user 0m21.262s 00:19:02.785 sys 0m2.049s 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.785 18:17:27 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:19:02.785 18:17:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:02.785 18:17:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.785 18:17:27 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:19:02.785 ************************************ 00:19:02.785 START TEST raid5f_rebuild_test_sb 00:19:02.785 ************************************ 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
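
The `(( i <= num_base_bdevs ))` / `echo BaseBdevN` lines traced above are the harness building its list of base bdev names before wiring them into a RAID bdev. The real helper lives in `bdev_raid.sh`; this is a minimal reconstruction of that counter-loop pattern, not the SPDK source itself:

```shell
#!/usr/bin/env bash
# Sketch of the base-bdev list construction traced in the xtrace output
# above: count from 1 to num_base_bdevs, emitting one BaseBdevN name each
# pass. The count of 3 matches the raid5f tests in this log.
num_base_bdevs=3
base_bdevs=()
i=1
while (( i <= num_base_bdevs )); do
    base_bdevs+=("BaseBdev$i")   # BaseBdev1, BaseBdev2, BaseBdev3
    (( i++ ))
done
echo "${base_bdevs[@]}"
```

In the log the same names are later passed to `rpc_cmd bdev_raid_create ... -b 'BaseBdev1 BaseBdev2 BaseBdev3'`.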
00:19:02.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82439 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82439 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82439 ']' 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U 
-z -L bdev_raid 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.785 18:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.785 [2024-12-06 18:17:28.040241] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:19:02.785 [2024-12-06 18:17:28.040673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82439 ] 00:19:02.785 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:02.785 Zero copy mechanism will not be used. 
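
The `waitforlisten 82439` call above blocks until the freshly launched bdevperf process answers on `/var/tmp/spdk.sock` (hence the "Waiting for process to start up and listen on UNIX domain socket" message). The real helper is defined in `autotest_common.sh`; the sketch below reconstructs only the retry pattern, with the probe command parameterized — in this log the probe would be an RPC call against the socket via `scripts/rpc.py`, but the exact probe command is an assumption:

```shell
#!/usr/bin/env bash
# Hedged sketch of the waitforlisten retry loop: invoke a probe command
# until it succeeds, giving up after max_retries attempts. The probe is
# passed in rather than hard-coded, since the real helper's internals
# are not shown in this log.
waitforlisten() {
    local probe=$1 max_retries=${2:-100} i=0
    until "$probe"; do
        if (( ++i > max_retries )); then
            echo "timed out waiting for listener" >&2
            return 1
        fi
        sleep 0.1
    done
}
```

The log's `local max_retries=100` line suggests the same bounded-retry shape.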
00:19:02.785 [2024-12-06 18:17:28.223155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.044 [2024-12-06 18:17:28.356845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.301 [2024-12-06 18:17:28.568995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:03.301 [2024-12-06 18:17:28.569321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:03.559 18:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.559 18:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:03.559 18:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:03.559 18:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:03.559 18:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.559 18:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.559 BaseBdev1_malloc 00:19:03.559 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.559 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:03.559 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.559 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.559 [2024-12-06 18:17:29.034859] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:03.559 [2024-12-06 18:17:29.035082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.559 [2024-12-06 18:17:29.035158] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:03.559 
[2024-12-06 18:17:29.035436] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.559 [2024-12-06 18:17:29.038200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.559 [2024-12-06 18:17:29.038375] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:03.559 BaseBdev1 00:19:03.559 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.559 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:03.559 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:03.559 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.559 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.816 BaseBdev2_malloc 00:19:03.816 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.816 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:03.816 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.816 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.816 [2024-12-06 18:17:29.090874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:03.817 [2024-12-06 18:17:29.090971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.817 [2024-12-06 18:17:29.091007] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:03.817 [2024-12-06 18:17:29.091026] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.817 [2024-12-06 18:17:29.093913] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.817 [2024-12-06 18:17:29.093963] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:03.817 BaseBdev2 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 BaseBdev3_malloc 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 [2024-12-06 18:17:29.157623] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:03.817 [2024-12-06 18:17:29.157830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.817 [2024-12-06 18:17:29.157907] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:03.817 [2024-12-06 18:17:29.158033] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.817 [2024-12-06 18:17:29.160906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.817 [2024-12-06 18:17:29.161077] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:19:03.817 BaseBdev3 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 spare_malloc 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 spare_delay 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 [2024-12-06 18:17:29.218268] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:03.817 [2024-12-06 18:17:29.218341] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.817 [2024-12-06 18:17:29.218368] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:03.817 [2024-12-06 18:17:29.218386] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.817 [2024-12-06 18:17:29.221178] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.817 [2024-12-06 18:17:29.221232] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:03.817 spare 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 [2024-12-06 18:17:29.226365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.817 [2024-12-06 18:17:29.228800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:03.817 [2024-12-06 18:17:29.228894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:03.817 [2024-12-06 18:17:29.229144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:03.817 [2024-12-06 18:17:29.229163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:03.817 [2024-12-06 18:17:29.229486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:03.817 [2024-12-06 18:17:29.234662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:03.817 [2024-12-06 18:17:29.234705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:03.817 [2024-12-06 18:17:29.234962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.817 18:17:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.817 "name": "raid_bdev1", 00:19:03.817 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:03.817 "strip_size_kb": 64, 00:19:03.817 "state": "online", 00:19:03.817 "raid_level": "raid5f", 00:19:03.817 "superblock": true, 
00:19:03.817 "num_base_bdevs": 3, 00:19:03.817 "num_base_bdevs_discovered": 3, 00:19:03.817 "num_base_bdevs_operational": 3, 00:19:03.817 "base_bdevs_list": [ 00:19:03.817 { 00:19:03.817 "name": "BaseBdev1", 00:19:03.817 "uuid": "5cfa9f55-c40b-51d7-aeca-cd672a7bc4f0", 00:19:03.817 "is_configured": true, 00:19:03.817 "data_offset": 2048, 00:19:03.817 "data_size": 63488 00:19:03.817 }, 00:19:03.817 { 00:19:03.817 "name": "BaseBdev2", 00:19:03.817 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:03.817 "is_configured": true, 00:19:03.817 "data_offset": 2048, 00:19:03.817 "data_size": 63488 00:19:03.817 }, 00:19:03.817 { 00:19:03.817 "name": "BaseBdev3", 00:19:03.817 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:03.817 "is_configured": true, 00:19:03.817 "data_offset": 2048, 00:19:03.817 "data_size": 63488 00:19:03.817 } 00:19:03.817 ] 00:19:03.817 }' 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.817 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:04.385 [2024-12-06 18:17:29.768982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.385 18:17:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:04.385 18:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:19:04.953 [2024-12-06 18:17:30.192920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:04.953 /dev/nbd0 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:04.953 1+0 records in 00:19:04.953 1+0 records out 00:19:04.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067467 s, 6.1 MB/s 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:19:04.953 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:19:05.522 496+0 records in 00:19:05.522 496+0 records out 00:19:05.522 65011712 bytes (65 MB, 62 MiB) copied, 0.481857 s, 135 MB/s 00:19:05.522 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:05.522 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:05.522 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:05.522 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:05.522 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:05.522 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:05.522 18:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:05.780 [2024-12-06 18:17:31.081419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.780 [2024-12-06 18:17:31.095672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.780 18:17:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.780 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.780 "name": "raid_bdev1", 00:19:05.780 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:05.780 "strip_size_kb": 64, 00:19:05.780 "state": "online", 00:19:05.780 "raid_level": "raid5f", 00:19:05.780 "superblock": true, 00:19:05.780 "num_base_bdevs": 3, 00:19:05.780 "num_base_bdevs_discovered": 2, 00:19:05.780 "num_base_bdevs_operational": 2, 00:19:05.780 "base_bdevs_list": [ 00:19:05.780 { 00:19:05.780 "name": null, 00:19:05.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.780 "is_configured": false, 00:19:05.780 "data_offset": 0, 00:19:05.780 "data_size": 63488 00:19:05.780 }, 00:19:05.780 { 00:19:05.780 "name": "BaseBdev2", 00:19:05.780 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:05.781 "is_configured": true, 00:19:05.781 "data_offset": 2048, 00:19:05.781 "data_size": 63488 00:19:05.781 }, 00:19:05.781 { 00:19:05.781 "name": "BaseBdev3", 00:19:05.781 "uuid": 
"0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:05.781 "is_configured": true, 00:19:05.781 "data_offset": 2048, 00:19:05.781 "data_size": 63488 00:19:05.781 } 00:19:05.781 ] 00:19:05.781 }' 00:19:05.781 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.781 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.348 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:06.348 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.348 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.348 [2024-12-06 18:17:31.635824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:06.348 [2024-12-06 18:17:31.651202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:19:06.348 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.348 18:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:06.348 [2024-12-06 18:17:31.658943] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:07.284 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.284 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.284 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.284 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.284 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.284 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:19:07.284 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.284 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.284 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.284 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.284 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.284 "name": "raid_bdev1", 00:19:07.284 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:07.284 "strip_size_kb": 64, 00:19:07.284 "state": "online", 00:19:07.284 "raid_level": "raid5f", 00:19:07.284 "superblock": true, 00:19:07.284 "num_base_bdevs": 3, 00:19:07.284 "num_base_bdevs_discovered": 3, 00:19:07.284 "num_base_bdevs_operational": 3, 00:19:07.284 "process": { 00:19:07.284 "type": "rebuild", 00:19:07.284 "target": "spare", 00:19:07.284 "progress": { 00:19:07.284 "blocks": 18432, 00:19:07.284 "percent": 14 00:19:07.284 } 00:19:07.284 }, 00:19:07.284 "base_bdevs_list": [ 00:19:07.284 { 00:19:07.284 "name": "spare", 00:19:07.284 "uuid": "c3d61f1b-fec2-5654-b169-1497deca368a", 00:19:07.284 "is_configured": true, 00:19:07.284 "data_offset": 2048, 00:19:07.284 "data_size": 63488 00:19:07.284 }, 00:19:07.284 { 00:19:07.284 "name": "BaseBdev2", 00:19:07.284 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:07.284 "is_configured": true, 00:19:07.284 "data_offset": 2048, 00:19:07.284 "data_size": 63488 00:19:07.284 }, 00:19:07.284 { 00:19:07.284 "name": "BaseBdev3", 00:19:07.284 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:07.284 "is_configured": true, 00:19:07.284 "data_offset": 2048, 00:19:07.284 "data_size": 63488 00:19:07.284 } 00:19:07.284 ] 00:19:07.284 }' 00:19:07.284 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.284 18:17:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.284 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.543 [2024-12-06 18:17:32.825029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:07.543 [2024-12-06 18:17:32.874561] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:07.543 [2024-12-06 18:17:32.874832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.543 [2024-12-06 18:17:32.874982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:07.543 [2024-12-06 18:17:32.875036] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.543 18:17:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.543 "name": "raid_bdev1", 00:19:07.543 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:07.543 "strip_size_kb": 64, 00:19:07.543 "state": "online", 00:19:07.543 "raid_level": "raid5f", 00:19:07.543 "superblock": true, 00:19:07.543 "num_base_bdevs": 3, 00:19:07.543 "num_base_bdevs_discovered": 2, 00:19:07.543 "num_base_bdevs_operational": 2, 00:19:07.543 "base_bdevs_list": [ 00:19:07.543 { 00:19:07.543 "name": null, 00:19:07.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.543 "is_configured": false, 00:19:07.543 "data_offset": 0, 00:19:07.543 "data_size": 63488 00:19:07.543 }, 00:19:07.543 { 00:19:07.543 "name": "BaseBdev2", 00:19:07.543 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:07.543 "is_configured": true, 00:19:07.543 "data_offset": 2048, 00:19:07.543 "data_size": 
63488 00:19:07.543 }, 00:19:07.543 { 00:19:07.543 "name": "BaseBdev3", 00:19:07.543 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:07.543 "is_configured": true, 00:19:07.543 "data_offset": 2048, 00:19:07.543 "data_size": 63488 00:19:07.543 } 00:19:07.543 ] 00:19:07.543 }' 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.543 18:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.110 "name": "raid_bdev1", 00:19:08.110 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:08.110 "strip_size_kb": 64, 00:19:08.110 "state": "online", 00:19:08.110 "raid_level": "raid5f", 00:19:08.110 "superblock": true, 00:19:08.110 "num_base_bdevs": 3, 00:19:08.110 
"num_base_bdevs_discovered": 2, 00:19:08.110 "num_base_bdevs_operational": 2, 00:19:08.110 "base_bdevs_list": [ 00:19:08.110 { 00:19:08.110 "name": null, 00:19:08.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.110 "is_configured": false, 00:19:08.110 "data_offset": 0, 00:19:08.110 "data_size": 63488 00:19:08.110 }, 00:19:08.110 { 00:19:08.110 "name": "BaseBdev2", 00:19:08.110 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:08.110 "is_configured": true, 00:19:08.110 "data_offset": 2048, 00:19:08.110 "data_size": 63488 00:19:08.110 }, 00:19:08.110 { 00:19:08.110 "name": "BaseBdev3", 00:19:08.110 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:08.110 "is_configured": true, 00:19:08.110 "data_offset": 2048, 00:19:08.110 "data_size": 63488 00:19:08.110 } 00:19:08.110 ] 00:19:08.110 }' 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.110 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.369 [2024-12-06 18:17:33.630600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:08.369 [2024-12-06 18:17:33.645643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:19:08.370 18:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.370 18:17:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:08.370 [2024-12-06 18:17:33.653216] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.306 "name": "raid_bdev1", 00:19:09.306 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:09.306 "strip_size_kb": 64, 00:19:09.306 "state": "online", 00:19:09.306 "raid_level": "raid5f", 00:19:09.306 "superblock": true, 00:19:09.306 "num_base_bdevs": 3, 00:19:09.306 "num_base_bdevs_discovered": 3, 00:19:09.306 "num_base_bdevs_operational": 3, 00:19:09.306 "process": { 00:19:09.306 "type": "rebuild", 00:19:09.306 "target": "spare", 00:19:09.306 "progress": { 00:19:09.306 "blocks": 18432, 00:19:09.306 "percent": 14 00:19:09.306 } 
00:19:09.306 }, 00:19:09.306 "base_bdevs_list": [ 00:19:09.306 { 00:19:09.306 "name": "spare", 00:19:09.306 "uuid": "c3d61f1b-fec2-5654-b169-1497deca368a", 00:19:09.306 "is_configured": true, 00:19:09.306 "data_offset": 2048, 00:19:09.306 "data_size": 63488 00:19:09.306 }, 00:19:09.306 { 00:19:09.306 "name": "BaseBdev2", 00:19:09.306 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:09.306 "is_configured": true, 00:19:09.306 "data_offset": 2048, 00:19:09.306 "data_size": 63488 00:19:09.306 }, 00:19:09.306 { 00:19:09.306 "name": "BaseBdev3", 00:19:09.306 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:09.306 "is_configured": true, 00:19:09.306 "data_offset": 2048, 00:19:09.306 "data_size": 63488 00:19:09.306 } 00:19:09.306 ] 00:19:09.306 }' 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:09.306 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=610 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:09.306 18:17:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.306 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.565 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.565 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.565 "name": "raid_bdev1", 00:19:09.565 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:09.565 "strip_size_kb": 64, 00:19:09.565 "state": "online", 00:19:09.565 "raid_level": "raid5f", 00:19:09.565 "superblock": true, 00:19:09.565 "num_base_bdevs": 3, 00:19:09.565 "num_base_bdevs_discovered": 3, 00:19:09.565 "num_base_bdevs_operational": 3, 00:19:09.565 "process": { 00:19:09.565 "type": "rebuild", 00:19:09.565 "target": "spare", 00:19:09.565 "progress": { 00:19:09.565 "blocks": 22528, 00:19:09.565 "percent": 17 00:19:09.565 } 00:19:09.565 }, 00:19:09.565 "base_bdevs_list": [ 00:19:09.565 { 00:19:09.565 "name": "spare", 00:19:09.565 "uuid": "c3d61f1b-fec2-5654-b169-1497deca368a", 00:19:09.565 "is_configured": true, 00:19:09.565 "data_offset": 2048, 00:19:09.565 
"data_size": 63488 00:19:09.565 }, 00:19:09.565 { 00:19:09.565 "name": "BaseBdev2", 00:19:09.565 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:09.565 "is_configured": true, 00:19:09.565 "data_offset": 2048, 00:19:09.565 "data_size": 63488 00:19:09.565 }, 00:19:09.565 { 00:19:09.565 "name": "BaseBdev3", 00:19:09.565 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:09.565 "is_configured": true, 00:19:09.565 "data_offset": 2048, 00:19:09.565 "data_size": 63488 00:19:09.565 } 00:19:09.565 ] 00:19:09.565 }' 00:19:09.565 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.565 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.565 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.565 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.565 18:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:10.569 18:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:10.569 18:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.569 18:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.569 18:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:10.569 18:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:10.569 18:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.569 18:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.569 18:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:10.569 18:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.569 18:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.569 18:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.569 18:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.569 "name": "raid_bdev1", 00:19:10.569 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:10.569 "strip_size_kb": 64, 00:19:10.569 "state": "online", 00:19:10.569 "raid_level": "raid5f", 00:19:10.569 "superblock": true, 00:19:10.569 "num_base_bdevs": 3, 00:19:10.569 "num_base_bdevs_discovered": 3, 00:19:10.569 "num_base_bdevs_operational": 3, 00:19:10.569 "process": { 00:19:10.569 "type": "rebuild", 00:19:10.569 "target": "spare", 00:19:10.569 "progress": { 00:19:10.569 "blocks": 45056, 00:19:10.569 "percent": 35 00:19:10.569 } 00:19:10.569 }, 00:19:10.569 "base_bdevs_list": [ 00:19:10.569 { 00:19:10.569 "name": "spare", 00:19:10.569 "uuid": "c3d61f1b-fec2-5654-b169-1497deca368a", 00:19:10.569 "is_configured": true, 00:19:10.569 "data_offset": 2048, 00:19:10.569 "data_size": 63488 00:19:10.569 }, 00:19:10.569 { 00:19:10.569 "name": "BaseBdev2", 00:19:10.569 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:10.569 "is_configured": true, 00:19:10.569 "data_offset": 2048, 00:19:10.569 "data_size": 63488 00:19:10.569 }, 00:19:10.569 { 00:19:10.569 "name": "BaseBdev3", 00:19:10.569 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:10.569 "is_configured": true, 00:19:10.569 "data_offset": 2048, 00:19:10.569 "data_size": 63488 00:19:10.569 } 00:19:10.569 ] 00:19:10.569 }' 00:19:10.569 18:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.569 18:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.569 18:17:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.828 18:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.828 18:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:11.766 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:11.766 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.766 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.766 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.766 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.766 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.766 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.766 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.766 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.766 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.766 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.766 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.766 "name": "raid_bdev1", 00:19:11.766 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:11.766 "strip_size_kb": 64, 00:19:11.766 "state": "online", 00:19:11.766 "raid_level": "raid5f", 00:19:11.766 "superblock": true, 00:19:11.766 "num_base_bdevs": 3, 00:19:11.766 "num_base_bdevs_discovered": 3, 00:19:11.766 "num_base_bdevs_operational": 
3, 00:19:11.766 "process": { 00:19:11.766 "type": "rebuild", 00:19:11.766 "target": "spare", 00:19:11.766 "progress": { 00:19:11.766 "blocks": 69632, 00:19:11.766 "percent": 54 00:19:11.766 } 00:19:11.766 }, 00:19:11.766 "base_bdevs_list": [ 00:19:11.766 { 00:19:11.766 "name": "spare", 00:19:11.766 "uuid": "c3d61f1b-fec2-5654-b169-1497deca368a", 00:19:11.766 "is_configured": true, 00:19:11.766 "data_offset": 2048, 00:19:11.766 "data_size": 63488 00:19:11.766 }, 00:19:11.766 { 00:19:11.766 "name": "BaseBdev2", 00:19:11.766 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:11.767 "is_configured": true, 00:19:11.767 "data_offset": 2048, 00:19:11.767 "data_size": 63488 00:19:11.767 }, 00:19:11.767 { 00:19:11.767 "name": "BaseBdev3", 00:19:11.767 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:11.767 "is_configured": true, 00:19:11.767 "data_offset": 2048, 00:19:11.767 "data_size": 63488 00:19:11.767 } 00:19:11.767 ] 00:19:11.767 }' 00:19:11.767 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.767 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.767 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.767 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.767 18:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.144 
18:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.144 "name": "raid_bdev1", 00:19:13.144 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:13.144 "strip_size_kb": 64, 00:19:13.144 "state": "online", 00:19:13.144 "raid_level": "raid5f", 00:19:13.144 "superblock": true, 00:19:13.144 "num_base_bdevs": 3, 00:19:13.144 "num_base_bdevs_discovered": 3, 00:19:13.144 "num_base_bdevs_operational": 3, 00:19:13.144 "process": { 00:19:13.144 "type": "rebuild", 00:19:13.144 "target": "spare", 00:19:13.144 "progress": { 00:19:13.144 "blocks": 92160, 00:19:13.144 "percent": 72 00:19:13.144 } 00:19:13.144 }, 00:19:13.144 "base_bdevs_list": [ 00:19:13.144 { 00:19:13.144 "name": "spare", 00:19:13.144 "uuid": "c3d61f1b-fec2-5654-b169-1497deca368a", 00:19:13.144 "is_configured": true, 00:19:13.144 "data_offset": 2048, 00:19:13.144 "data_size": 63488 00:19:13.144 }, 00:19:13.144 { 00:19:13.144 "name": "BaseBdev2", 00:19:13.144 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:13.144 "is_configured": true, 00:19:13.144 "data_offset": 2048, 00:19:13.144 "data_size": 63488 00:19:13.144 }, 00:19:13.144 { 00:19:13.144 "name": "BaseBdev3", 00:19:13.144 "uuid": 
"0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:13.144 "is_configured": true, 00:19:13.144 "data_offset": 2048, 00:19:13.144 "data_size": 63488 00:19:13.144 } 00:19:13.144 ] 00:19:13.144 }' 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.144 18:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:14.097 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:14.097 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.097 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.097 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.097 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.097 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.097 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.097 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.097 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.097 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.097 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.097 
18:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.097 "name": "raid_bdev1", 00:19:14.097 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:14.097 "strip_size_kb": 64, 00:19:14.097 "state": "online", 00:19:14.097 "raid_level": "raid5f", 00:19:14.097 "superblock": true, 00:19:14.097 "num_base_bdevs": 3, 00:19:14.097 "num_base_bdevs_discovered": 3, 00:19:14.097 "num_base_bdevs_operational": 3, 00:19:14.097 "process": { 00:19:14.098 "type": "rebuild", 00:19:14.098 "target": "spare", 00:19:14.098 "progress": { 00:19:14.098 "blocks": 116736, 00:19:14.098 "percent": 91 00:19:14.098 } 00:19:14.098 }, 00:19:14.098 "base_bdevs_list": [ 00:19:14.098 { 00:19:14.098 "name": "spare", 00:19:14.098 "uuid": "c3d61f1b-fec2-5654-b169-1497deca368a", 00:19:14.098 "is_configured": true, 00:19:14.098 "data_offset": 2048, 00:19:14.098 "data_size": 63488 00:19:14.098 }, 00:19:14.098 { 00:19:14.098 "name": "BaseBdev2", 00:19:14.098 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:14.098 "is_configured": true, 00:19:14.098 "data_offset": 2048, 00:19:14.098 "data_size": 63488 00:19:14.098 }, 00:19:14.098 { 00:19:14.098 "name": "BaseBdev3", 00:19:14.098 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:14.098 "is_configured": true, 00:19:14.098 "data_offset": 2048, 00:19:14.098 "data_size": 63488 00:19:14.098 } 00:19:14.098 ] 00:19:14.098 }' 00:19:14.098 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.098 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.098 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.386 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.386 18:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:14.644 [2024-12-06 18:17:39.934825] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:14.645 [2024-12-06 18:17:39.935151] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:14.645 [2024-12-06 18:17:39.935332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.211 "name": "raid_bdev1", 00:19:15.211 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:15.211 "strip_size_kb": 64, 00:19:15.211 "state": "online", 00:19:15.211 "raid_level": "raid5f", 00:19:15.211 "superblock": true, 00:19:15.211 "num_base_bdevs": 3, 00:19:15.211 "num_base_bdevs_discovered": 3, 
00:19:15.211 "num_base_bdevs_operational": 3, 00:19:15.211 "base_bdevs_list": [ 00:19:15.211 { 00:19:15.211 "name": "spare", 00:19:15.211 "uuid": "c3d61f1b-fec2-5654-b169-1497deca368a", 00:19:15.211 "is_configured": true, 00:19:15.211 "data_offset": 2048, 00:19:15.211 "data_size": 63488 00:19:15.211 }, 00:19:15.211 { 00:19:15.211 "name": "BaseBdev2", 00:19:15.211 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:15.211 "is_configured": true, 00:19:15.211 "data_offset": 2048, 00:19:15.211 "data_size": 63488 00:19:15.211 }, 00:19:15.211 { 00:19:15.211 "name": "BaseBdev3", 00:19:15.211 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:15.211 "is_configured": true, 00:19:15.211 "data_offset": 2048, 00:19:15.211 "data_size": 63488 00:19:15.211 } 00:19:15.211 ] 00:19:15.211 }' 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:15.211 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.470 "name": "raid_bdev1", 00:19:15.470 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:15.470 "strip_size_kb": 64, 00:19:15.470 "state": "online", 00:19:15.470 "raid_level": "raid5f", 00:19:15.470 "superblock": true, 00:19:15.470 "num_base_bdevs": 3, 00:19:15.470 "num_base_bdevs_discovered": 3, 00:19:15.470 "num_base_bdevs_operational": 3, 00:19:15.470 "base_bdevs_list": [ 00:19:15.470 { 00:19:15.470 "name": "spare", 00:19:15.470 "uuid": "c3d61f1b-fec2-5654-b169-1497deca368a", 00:19:15.470 "is_configured": true, 00:19:15.470 "data_offset": 2048, 00:19:15.470 "data_size": 63488 00:19:15.470 }, 00:19:15.470 { 00:19:15.470 "name": "BaseBdev2", 00:19:15.470 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:15.470 "is_configured": true, 00:19:15.470 "data_offset": 2048, 00:19:15.470 "data_size": 63488 00:19:15.470 }, 00:19:15.470 { 00:19:15.470 "name": "BaseBdev3", 00:19:15.470 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:15.470 "is_configured": true, 00:19:15.470 "data_offset": 2048, 00:19:15.470 "data_size": 63488 00:19:15.470 } 00:19:15.470 ] 00:19:15.470 }' 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.470 "name": "raid_bdev1", 00:19:15.470 "uuid": 
"0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:15.470 "strip_size_kb": 64, 00:19:15.470 "state": "online", 00:19:15.470 "raid_level": "raid5f", 00:19:15.470 "superblock": true, 00:19:15.470 "num_base_bdevs": 3, 00:19:15.470 "num_base_bdevs_discovered": 3, 00:19:15.470 "num_base_bdevs_operational": 3, 00:19:15.470 "base_bdevs_list": [ 00:19:15.470 { 00:19:15.470 "name": "spare", 00:19:15.470 "uuid": "c3d61f1b-fec2-5654-b169-1497deca368a", 00:19:15.470 "is_configured": true, 00:19:15.470 "data_offset": 2048, 00:19:15.470 "data_size": 63488 00:19:15.470 }, 00:19:15.470 { 00:19:15.470 "name": "BaseBdev2", 00:19:15.470 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:15.470 "is_configured": true, 00:19:15.470 "data_offset": 2048, 00:19:15.470 "data_size": 63488 00:19:15.470 }, 00:19:15.470 { 00:19:15.470 "name": "BaseBdev3", 00:19:15.470 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:15.470 "is_configured": true, 00:19:15.470 "data_offset": 2048, 00:19:15.470 "data_size": 63488 00:19:15.470 } 00:19:15.470 ] 00:19:15.470 }' 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.470 18:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.035 [2024-12-06 18:17:41.462908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:16.035 [2024-12-06 18:17:41.463076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.035 [2024-12-06 18:17:41.463206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.035 [2024-12-06 18:17:41.463313] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.035 [2024-12-06 18:17:41.463337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.035 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:16.603 /dev/nbd0 00:19:16.603 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:16.604 1+0 records in 00:19:16.604 1+0 records out 00:19:16.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359202 s, 11.4 MB/s 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.604 18:17:41 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.604 18:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:16.863 /dev/nbd1 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:16.863 1+0 records in 00:19:16.863 1+0 records out 00:19:16.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394649 s, 10.4 MB/s 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:16.863 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:19:17.122 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:17.122 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:17.122 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:17.122 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.122 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.122 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:17.122 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:17.122 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.122 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:17.122 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.689 [2024-12-06 18:17:42.975239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:17.689 [2024-12-06 18:17:42.975317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.689 [2024-12-06 18:17:42.975348] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:17.689 [2024-12-06 18:17:42.975366] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.689 [2024-12-06 18:17:42.978307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.689 [2024-12-06 18:17:42.978506] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:17.689 [2024-12-06 18:17:42.978653] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:17.689 [2024-12-06 18:17:42.978725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:17.689 [2024-12-06 18:17:42.978920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:17.689 [2024-12-06 18:17:42.979070] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:17.689 spare 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.689 18:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.689 [2024-12-06 18:17:43.079209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:17.689 [2024-12-06 18:17:43.079283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:17.689 [2024-12-06 18:17:43.079709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:19:17.689 [2024-12-06 18:17:43.084615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:17.690 [2024-12-06 18:17:43.084642] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:17.690 [2024-12-06 18:17:43.084931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.690 "name": "raid_bdev1", 00:19:17.690 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:17.690 "strip_size_kb": 64, 00:19:17.690 "state": "online", 00:19:17.690 "raid_level": "raid5f", 00:19:17.690 "superblock": true, 00:19:17.690 "num_base_bdevs": 3, 00:19:17.690 "num_base_bdevs_discovered": 3, 00:19:17.690 "num_base_bdevs_operational": 3, 00:19:17.690 "base_bdevs_list": [ 00:19:17.690 { 00:19:17.690 "name": "spare", 00:19:17.690 "uuid": "c3d61f1b-fec2-5654-b169-1497deca368a", 00:19:17.690 "is_configured": true, 00:19:17.690 "data_offset": 2048, 00:19:17.690 "data_size": 63488 00:19:17.690 }, 00:19:17.690 { 00:19:17.690 "name": "BaseBdev2", 00:19:17.690 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:17.690 "is_configured": true, 00:19:17.690 "data_offset": 
2048, 00:19:17.690 "data_size": 63488 00:19:17.690 }, 00:19:17.690 { 00:19:17.690 "name": "BaseBdev3", 00:19:17.690 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:17.690 "is_configured": true, 00:19:17.690 "data_offset": 2048, 00:19:17.690 "data_size": 63488 00:19:17.690 } 00:19:17.690 ] 00:19:17.690 }' 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.690 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.308 "name": "raid_bdev1", 00:19:18.308 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:18.308 "strip_size_kb": 64, 00:19:18.308 "state": "online", 00:19:18.308 "raid_level": "raid5f", 00:19:18.308 "superblock": true, 00:19:18.308 
"num_base_bdevs": 3, 00:19:18.308 "num_base_bdevs_discovered": 3, 00:19:18.308 "num_base_bdevs_operational": 3, 00:19:18.308 "base_bdevs_list": [ 00:19:18.308 { 00:19:18.308 "name": "spare", 00:19:18.308 "uuid": "c3d61f1b-fec2-5654-b169-1497deca368a", 00:19:18.308 "is_configured": true, 00:19:18.308 "data_offset": 2048, 00:19:18.308 "data_size": 63488 00:19:18.308 }, 00:19:18.308 { 00:19:18.308 "name": "BaseBdev2", 00:19:18.308 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:18.308 "is_configured": true, 00:19:18.308 "data_offset": 2048, 00:19:18.308 "data_size": 63488 00:19:18.308 }, 00:19:18.308 { 00:19:18.308 "name": "BaseBdev3", 00:19:18.308 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:18.308 "is_configured": true, 00:19:18.308 "data_offset": 2048, 00:19:18.308 "data_size": 63488 00:19:18.308 } 00:19:18.308 ] 00:19:18.308 }' 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:18.308 18:17:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.308 [2024-12-06 18:17:43.794917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.308 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.309 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.309 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.309 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.309 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.309 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.309 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.309 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:18.309 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.589 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.589 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.589 "name": "raid_bdev1", 00:19:18.589 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:18.589 "strip_size_kb": 64, 00:19:18.589 "state": "online", 00:19:18.589 "raid_level": "raid5f", 00:19:18.589 "superblock": true, 00:19:18.589 "num_base_bdevs": 3, 00:19:18.589 "num_base_bdevs_discovered": 2, 00:19:18.589 "num_base_bdevs_operational": 2, 00:19:18.589 "base_bdevs_list": [ 00:19:18.589 { 00:19:18.589 "name": null, 00:19:18.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.589 "is_configured": false, 00:19:18.589 "data_offset": 0, 00:19:18.589 "data_size": 63488 00:19:18.589 }, 00:19:18.589 { 00:19:18.589 "name": "BaseBdev2", 00:19:18.589 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:18.589 "is_configured": true, 00:19:18.589 "data_offset": 2048, 00:19:18.589 "data_size": 63488 00:19:18.589 }, 00:19:18.589 { 00:19:18.589 "name": "BaseBdev3", 00:19:18.589 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:18.589 "is_configured": true, 00:19:18.589 "data_offset": 2048, 00:19:18.589 "data_size": 63488 00:19:18.589 } 00:19:18.589 ] 00:19:18.589 }' 00:19:18.589 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.589 18:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.847 18:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:18.847 18:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.847 18:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.847 [2024-12-06 18:17:44.327090] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.847 [2024-12-06 18:17:44.327320] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:18.847 [2024-12-06 18:17:44.327348] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:18.847 [2024-12-06 18:17:44.327401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.847 [2024-12-06 18:17:44.341643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:19:18.847 18:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.847 18:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:18.847 [2024-12-06 18:17:44.348937] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.233 "name": "raid_bdev1", 00:19:20.233 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:20.233 "strip_size_kb": 64, 00:19:20.233 "state": "online", 00:19:20.233 "raid_level": "raid5f", 00:19:20.233 "superblock": true, 00:19:20.233 "num_base_bdevs": 3, 00:19:20.233 "num_base_bdevs_discovered": 3, 00:19:20.233 "num_base_bdevs_operational": 3, 00:19:20.233 "process": { 00:19:20.233 "type": "rebuild", 00:19:20.233 "target": "spare", 00:19:20.233 "progress": { 00:19:20.233 "blocks": 18432, 00:19:20.233 "percent": 14 00:19:20.233 } 00:19:20.233 }, 00:19:20.233 "base_bdevs_list": [ 00:19:20.233 { 00:19:20.233 "name": "spare", 00:19:20.233 "uuid": "c3d61f1b-fec2-5654-b169-1497deca368a", 00:19:20.233 "is_configured": true, 00:19:20.233 "data_offset": 2048, 00:19:20.233 "data_size": 63488 00:19:20.233 }, 00:19:20.233 { 00:19:20.233 "name": "BaseBdev2", 00:19:20.233 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:20.233 "is_configured": true, 00:19:20.233 "data_offset": 2048, 00:19:20.233 "data_size": 63488 00:19:20.233 }, 00:19:20.233 { 00:19:20.233 "name": "BaseBdev3", 00:19:20.233 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:20.233 "is_configured": true, 00:19:20.233 "data_offset": 2048, 00:19:20.233 "data_size": 63488 00:19:20.233 } 00:19:20.233 ] 00:19:20.233 }' 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.233 [2024-12-06 18:17:45.515025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.233 [2024-12-06 18:17:45.564536] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:20.233 [2024-12-06 18:17:45.564640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.233 [2024-12-06 18:17:45.564665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.233 [2024-12-06 18:17:45.564695] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.233 18:17:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.233 "name": "raid_bdev1", 00:19:20.233 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:20.233 "strip_size_kb": 64, 00:19:20.233 "state": "online", 00:19:20.233 "raid_level": "raid5f", 00:19:20.233 "superblock": true, 00:19:20.233 "num_base_bdevs": 3, 00:19:20.233 "num_base_bdevs_discovered": 2, 00:19:20.233 "num_base_bdevs_operational": 2, 00:19:20.233 "base_bdevs_list": [ 00:19:20.233 { 00:19:20.233 "name": null, 00:19:20.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.233 "is_configured": false, 00:19:20.233 "data_offset": 0, 00:19:20.233 "data_size": 63488 00:19:20.233 }, 00:19:20.233 { 00:19:20.233 "name": "BaseBdev2", 00:19:20.233 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:20.233 "is_configured": true, 00:19:20.233 "data_offset": 2048, 00:19:20.233 "data_size": 63488 00:19:20.233 }, 00:19:20.233 { 00:19:20.233 "name": "BaseBdev3", 00:19:20.233 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:20.233 "is_configured": true, 00:19:20.233 "data_offset": 2048, 00:19:20.233 "data_size": 63488 00:19:20.233 } 00:19:20.233 ] 00:19:20.233 }' 00:19:20.233 18:17:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.233 18:17:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.800 18:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:20.800 18:17:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.800 18:17:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.800 [2024-12-06 18:17:46.119920] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:20.800 [2024-12-06 18:17:46.120140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.800 [2024-12-06 18:17:46.120182] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:19:20.800 [2024-12-06 18:17:46.120204] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.800 [2024-12-06 18:17:46.120839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.800 [2024-12-06 18:17:46.120891] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:20.800 [2024-12-06 18:17:46.121017] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:20.801 [2024-12-06 18:17:46.121045] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:20.801 [2024-12-06 18:17:46.121060] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:20.801 [2024-12-06 18:17:46.121098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:20.801 [2024-12-06 18:17:46.135475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:19:20.801 spare 00:19:20.801 18:17:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.801 18:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:20.801 [2024-12-06 18:17:46.142741] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:21.734 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.734 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.734 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.734 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.734 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.734 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.734 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.734 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.734 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.734 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.734 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.734 "name": "raid_bdev1", 00:19:21.734 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:21.734 "strip_size_kb": 64, 00:19:21.734 "state": 
"online", 00:19:21.734 "raid_level": "raid5f", 00:19:21.734 "superblock": true, 00:19:21.734 "num_base_bdevs": 3, 00:19:21.734 "num_base_bdevs_discovered": 3, 00:19:21.734 "num_base_bdevs_operational": 3, 00:19:21.734 "process": { 00:19:21.734 "type": "rebuild", 00:19:21.734 "target": "spare", 00:19:21.734 "progress": { 00:19:21.734 "blocks": 18432, 00:19:21.734 "percent": 14 00:19:21.734 } 00:19:21.734 }, 00:19:21.734 "base_bdevs_list": [ 00:19:21.734 { 00:19:21.734 "name": "spare", 00:19:21.734 "uuid": "c3d61f1b-fec2-5654-b169-1497deca368a", 00:19:21.734 "is_configured": true, 00:19:21.734 "data_offset": 2048, 00:19:21.734 "data_size": 63488 00:19:21.734 }, 00:19:21.734 { 00:19:21.734 "name": "BaseBdev2", 00:19:21.734 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:21.734 "is_configured": true, 00:19:21.734 "data_offset": 2048, 00:19:21.734 "data_size": 63488 00:19:21.734 }, 00:19:21.734 { 00:19:21.734 "name": "BaseBdev3", 00:19:21.734 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:21.734 "is_configured": true, 00:19:21.734 "data_offset": 2048, 00:19:21.734 "data_size": 63488 00:19:21.734 } 00:19:21.734 ] 00:19:21.734 }' 00:19:21.734 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.734 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:21.734 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.993 [2024-12-06 18:17:47.309852] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.993 [2024-12-06 18:17:47.358152] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:21.993 [2024-12-06 18:17:47.358468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.993 [2024-12-06 18:17:47.358657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.993 [2024-12-06 18:17:47.358709] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.993 "name": "raid_bdev1", 00:19:21.993 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:21.993 "strip_size_kb": 64, 00:19:21.993 "state": "online", 00:19:21.993 "raid_level": "raid5f", 00:19:21.993 "superblock": true, 00:19:21.993 "num_base_bdevs": 3, 00:19:21.993 "num_base_bdevs_discovered": 2, 00:19:21.993 "num_base_bdevs_operational": 2, 00:19:21.993 "base_bdevs_list": [ 00:19:21.993 { 00:19:21.993 "name": null, 00:19:21.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.993 "is_configured": false, 00:19:21.993 "data_offset": 0, 00:19:21.993 "data_size": 63488 00:19:21.993 }, 00:19:21.993 { 00:19:21.993 "name": "BaseBdev2", 00:19:21.993 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:21.993 "is_configured": true, 00:19:21.993 "data_offset": 2048, 00:19:21.993 "data_size": 63488 00:19:21.993 }, 00:19:21.993 { 00:19:21.993 "name": "BaseBdev3", 00:19:21.993 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:21.993 "is_configured": true, 00:19:21.993 "data_offset": 2048, 00:19:21.993 "data_size": 63488 00:19:21.993 } 00:19:21.993 ] 00:19:21.993 }' 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.993 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.561 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:22.561 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:22.561 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:22.561 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:22.561 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.561 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.561 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.561 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.561 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.561 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.561 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.561 "name": "raid_bdev1", 00:19:22.561 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:22.561 "strip_size_kb": 64, 00:19:22.561 "state": "online", 00:19:22.561 "raid_level": "raid5f", 00:19:22.561 "superblock": true, 00:19:22.561 "num_base_bdevs": 3, 00:19:22.561 "num_base_bdevs_discovered": 2, 00:19:22.561 "num_base_bdevs_operational": 2, 00:19:22.561 "base_bdevs_list": [ 00:19:22.561 { 00:19:22.561 "name": null, 00:19:22.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.561 "is_configured": false, 00:19:22.561 "data_offset": 0, 00:19:22.561 "data_size": 63488 00:19:22.561 }, 00:19:22.561 { 00:19:22.561 "name": "BaseBdev2", 00:19:22.561 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:22.561 "is_configured": true, 00:19:22.561 "data_offset": 2048, 00:19:22.561 "data_size": 63488 00:19:22.561 }, 00:19:22.561 { 00:19:22.561 "name": "BaseBdev3", 00:19:22.561 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:22.561 "is_configured": true, 
00:19:22.561 "data_offset": 2048, 00:19:22.561 "data_size": 63488 00:19:22.561 } 00:19:22.561 ] 00:19:22.561 }' 00:19:22.561 18:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.561 18:17:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:22.561 18:17:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.561 18:17:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:22.561 18:17:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:22.561 18:17:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.561 18:17:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.561 18:17:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.561 18:17:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:22.561 18:17:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.561 18:17:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.561 [2024-12-06 18:17:48.069859] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:22.561 [2024-12-06 18:17:48.069924] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.561 [2024-12-06 18:17:48.069960] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:22.561 [2024-12-06 18:17:48.069975] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.561 [2024-12-06 18:17:48.070547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.561 [2024-12-06 
18:17:48.070579] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:22.561 [2024-12-06 18:17:48.070692] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:22.561 [2024-12-06 18:17:48.070720] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:22.561 [2024-12-06 18:17:48.070747] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:22.561 [2024-12-06 18:17:48.070760] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:22.561 BaseBdev1 00:19:22.561 18:17:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.561 18:17:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:23.565 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:23.565 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.565 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.565 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.565 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.565 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.565 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.565 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.565 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.565 18:17:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.825 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.825 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.825 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.825 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.825 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.825 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.825 "name": "raid_bdev1", 00:19:23.825 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:23.825 "strip_size_kb": 64, 00:19:23.825 "state": "online", 00:19:23.825 "raid_level": "raid5f", 00:19:23.825 "superblock": true, 00:19:23.825 "num_base_bdevs": 3, 00:19:23.825 "num_base_bdevs_discovered": 2, 00:19:23.825 "num_base_bdevs_operational": 2, 00:19:23.825 "base_bdevs_list": [ 00:19:23.825 { 00:19:23.825 "name": null, 00:19:23.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.825 "is_configured": false, 00:19:23.825 "data_offset": 0, 00:19:23.825 "data_size": 63488 00:19:23.825 }, 00:19:23.825 { 00:19:23.825 "name": "BaseBdev2", 00:19:23.825 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:23.825 "is_configured": true, 00:19:23.825 "data_offset": 2048, 00:19:23.825 "data_size": 63488 00:19:23.825 }, 00:19:23.825 { 00:19:23.825 "name": "BaseBdev3", 00:19:23.825 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:23.825 "is_configured": true, 00:19:23.825 "data_offset": 2048, 00:19:23.825 "data_size": 63488 00:19:23.825 } 00:19:23.825 ] 00:19:23.825 }' 00:19:23.825 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.825 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.393 "name": "raid_bdev1", 00:19:24.393 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:24.393 "strip_size_kb": 64, 00:19:24.393 "state": "online", 00:19:24.393 "raid_level": "raid5f", 00:19:24.393 "superblock": true, 00:19:24.393 "num_base_bdevs": 3, 00:19:24.393 "num_base_bdevs_discovered": 2, 00:19:24.393 "num_base_bdevs_operational": 2, 00:19:24.393 "base_bdevs_list": [ 00:19:24.393 { 00:19:24.393 "name": null, 00:19:24.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.393 "is_configured": false, 00:19:24.393 "data_offset": 0, 00:19:24.393 "data_size": 63488 00:19:24.393 }, 00:19:24.393 { 00:19:24.393 "name": "BaseBdev2", 00:19:24.393 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 
00:19:24.393 "is_configured": true, 00:19:24.393 "data_offset": 2048, 00:19:24.393 "data_size": 63488 00:19:24.393 }, 00:19:24.393 { 00:19:24.393 "name": "BaseBdev3", 00:19:24.393 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:24.393 "is_configured": true, 00:19:24.393 "data_offset": 2048, 00:19:24.393 "data_size": 63488 00:19:24.393 } 00:19:24.393 ] 00:19:24.393 }' 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.393 18:17:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.393 [2024-12-06 18:17:49.798602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.393 [2024-12-06 18:17:49.798830] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:24.393 [2024-12-06 18:17:49.798856] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:24.393 request: 00:19:24.393 { 00:19:24.393 "base_bdev": "BaseBdev1", 00:19:24.393 "raid_bdev": "raid_bdev1", 00:19:24.393 "method": "bdev_raid_add_base_bdev", 00:19:24.393 "req_id": 1 00:19:24.393 } 00:19:24.393 Got JSON-RPC error response 00:19:24.393 response: 00:19:24.393 { 00:19:24.393 "code": -22, 00:19:24.393 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:24.393 } 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.393 18:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.329 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.588 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.588 "name": "raid_bdev1", 00:19:25.588 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:25.588 "strip_size_kb": 64, 00:19:25.588 "state": "online", 00:19:25.588 "raid_level": "raid5f", 00:19:25.588 "superblock": true, 00:19:25.588 "num_base_bdevs": 3, 00:19:25.588 "num_base_bdevs_discovered": 2, 00:19:25.588 "num_base_bdevs_operational": 2, 00:19:25.588 "base_bdevs_list": [ 00:19:25.588 { 00:19:25.588 "name": null, 00:19:25.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.588 "is_configured": false, 00:19:25.588 "data_offset": 0, 00:19:25.588 "data_size": 63488 00:19:25.588 }, 00:19:25.588 { 00:19:25.588 
"name": "BaseBdev2", 00:19:25.588 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:25.588 "is_configured": true, 00:19:25.588 "data_offset": 2048, 00:19:25.588 "data_size": 63488 00:19:25.588 }, 00:19:25.588 { 00:19:25.588 "name": "BaseBdev3", 00:19:25.588 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:25.588 "is_configured": true, 00:19:25.588 "data_offset": 2048, 00:19:25.588 "data_size": 63488 00:19:25.588 } 00:19:25.588 ] 00:19:25.588 }' 00:19:25.588 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.588 18:17:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.847 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:25.847 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.847 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:25.847 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:25.847 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.847 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.847 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.847 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.847 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.847 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.123 "name": "raid_bdev1", 00:19:26.123 "uuid": "0b33ebc0-11bf-4a40-a61d-c7abc62b8081", 00:19:26.123 
"strip_size_kb": 64, 00:19:26.123 "state": "online", 00:19:26.123 "raid_level": "raid5f", 00:19:26.123 "superblock": true, 00:19:26.123 "num_base_bdevs": 3, 00:19:26.123 "num_base_bdevs_discovered": 2, 00:19:26.123 "num_base_bdevs_operational": 2, 00:19:26.123 "base_bdevs_list": [ 00:19:26.123 { 00:19:26.123 "name": null, 00:19:26.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.123 "is_configured": false, 00:19:26.123 "data_offset": 0, 00:19:26.123 "data_size": 63488 00:19:26.123 }, 00:19:26.123 { 00:19:26.123 "name": "BaseBdev2", 00:19:26.123 "uuid": "9244d5a5-7092-5cf4-a36a-59212587c34d", 00:19:26.123 "is_configured": true, 00:19:26.123 "data_offset": 2048, 00:19:26.123 "data_size": 63488 00:19:26.123 }, 00:19:26.123 { 00:19:26.123 "name": "BaseBdev3", 00:19:26.123 "uuid": "0449e133-e5a5-5a2e-818c-2c947a8eb9f0", 00:19:26.123 "is_configured": true, 00:19:26.123 "data_offset": 2048, 00:19:26.123 "data_size": 63488 00:19:26.123 } 00:19:26.123 ] 00:19:26.123 }' 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82439 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82439 ']' 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82439 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.123 18:17:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82439 00:19:26.123 killing process with pid 82439 00:19:26.123 Received shutdown signal, test time was about 60.000000 seconds 00:19:26.123 00:19:26.123 Latency(us) 00:19:26.123 [2024-12-06T18:17:51.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.123 [2024-12-06T18:17:51.643Z] =================================================================================================================== 00:19:26.123 [2024-12-06T18:17:51.643Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82439' 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82439 00:19:26.123 [2024-12-06 18:17:51.536333] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:26.123 18:17:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82439 00:19:26.123 [2024-12-06 18:17:51.536480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:26.123 [2024-12-06 18:17:51.536567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:26.123 [2024-12-06 18:17:51.536588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:26.382 [2024-12-06 18:17:51.896861] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.758 18:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:27.758 00:19:27.758 real 0m25.026s 00:19:27.758 user 0m33.396s 
00:19:27.758 sys 0m2.626s 00:19:27.758 18:17:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.758 18:17:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.758 ************************************ 00:19:27.758 END TEST raid5f_rebuild_test_sb 00:19:27.758 ************************************ 00:19:27.758 18:17:52 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:19:27.758 18:17:52 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:19:27.758 18:17:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:27.758 18:17:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.758 18:17:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.758 ************************************ 00:19:27.758 START TEST raid5f_state_function_test 00:19:27.758 ************************************ 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:27.758 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83204 00:19:27.759 Process raid pid: 83204 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83204' 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83204 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83204 ']' 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.759 18:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.759 [2024-12-06 18:17:53.104666] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:19:27.759 [2024-12-06 18:17:53.104827] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.018 [2024-12-06 18:17:53.281567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.018 [2024-12-06 18:17:53.412384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.277 [2024-12-06 18:17:53.617657] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.277 [2024-12-06 18:17:53.617698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.844 [2024-12-06 18:17:54.144345] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.844 [2024-12-06 18:17:54.144426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.844 [2024-12-06 18:17:54.144443] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.844 [2024-12-06 18:17:54.144460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.844 [2024-12-06 18:17:54.144470] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:19:28.844 [2024-12-06 18:17:54.144485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:28.844 [2024-12-06 18:17:54.144495] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:28.844 [2024-12-06 18:17:54.144511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.844 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.844 "name": "Existed_Raid", 00:19:28.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.844 "strip_size_kb": 64, 00:19:28.844 "state": "configuring", 00:19:28.844 "raid_level": "raid5f", 00:19:28.844 "superblock": false, 00:19:28.844 "num_base_bdevs": 4, 00:19:28.844 "num_base_bdevs_discovered": 0, 00:19:28.844 "num_base_bdevs_operational": 4, 00:19:28.844 "base_bdevs_list": [ 00:19:28.844 { 00:19:28.844 "name": "BaseBdev1", 00:19:28.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.844 "is_configured": false, 00:19:28.844 "data_offset": 0, 00:19:28.844 "data_size": 0 00:19:28.844 }, 00:19:28.844 { 00:19:28.844 "name": "BaseBdev2", 00:19:28.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.844 "is_configured": false, 00:19:28.844 "data_offset": 0, 00:19:28.844 "data_size": 0 00:19:28.844 }, 00:19:28.844 { 00:19:28.844 "name": "BaseBdev3", 00:19:28.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.844 "is_configured": false, 00:19:28.844 "data_offset": 0, 00:19:28.844 "data_size": 0 00:19:28.844 }, 00:19:28.844 { 00:19:28.845 "name": "BaseBdev4", 00:19:28.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.845 "is_configured": false, 00:19:28.845 "data_offset": 0, 00:19:28.845 "data_size": 0 00:19:28.845 } 00:19:28.845 ] 00:19:28.845 }' 00:19:28.845 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.845 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.413 [2024-12-06 18:17:54.648420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:29.413 [2024-12-06 18:17:54.648471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.413 [2024-12-06 18:17:54.656411] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:29.413 [2024-12-06 18:17:54.656463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:29.413 [2024-12-06 18:17:54.656478] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.413 [2024-12-06 18:17:54.656501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.413 [2024-12-06 18:17:54.656512] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:29.413 [2024-12-06 18:17:54.656527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:29.413 [2024-12-06 18:17:54.656536] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:19:29.413 [2024-12-06 18:17:54.656550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.413 [2024-12-06 18:17:54.701491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.413 BaseBdev1 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.413 
18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.413 [ 00:19:29.413 { 00:19:29.413 "name": "BaseBdev1", 00:19:29.413 "aliases": [ 00:19:29.413 "44709fca-27fe-4a6c-914a-39487339c0ca" 00:19:29.413 ], 00:19:29.413 "product_name": "Malloc disk", 00:19:29.413 "block_size": 512, 00:19:29.413 "num_blocks": 65536, 00:19:29.413 "uuid": "44709fca-27fe-4a6c-914a-39487339c0ca", 00:19:29.413 "assigned_rate_limits": { 00:19:29.413 "rw_ios_per_sec": 0, 00:19:29.413 "rw_mbytes_per_sec": 0, 00:19:29.413 "r_mbytes_per_sec": 0, 00:19:29.413 "w_mbytes_per_sec": 0 00:19:29.413 }, 00:19:29.413 "claimed": true, 00:19:29.413 "claim_type": "exclusive_write", 00:19:29.413 "zoned": false, 00:19:29.413 "supported_io_types": { 00:19:29.413 "read": true, 00:19:29.413 "write": true, 00:19:29.413 "unmap": true, 00:19:29.413 "flush": true, 00:19:29.413 "reset": true, 00:19:29.413 "nvme_admin": false, 00:19:29.413 "nvme_io": false, 00:19:29.413 "nvme_io_md": false, 00:19:29.413 "write_zeroes": true, 00:19:29.413 "zcopy": true, 00:19:29.413 "get_zone_info": false, 00:19:29.413 "zone_management": false, 00:19:29.413 "zone_append": false, 00:19:29.413 "compare": false, 00:19:29.413 "compare_and_write": false, 00:19:29.413 "abort": true, 00:19:29.413 "seek_hole": false, 00:19:29.413 "seek_data": false, 00:19:29.413 "copy": true, 00:19:29.413 "nvme_iov_md": false 00:19:29.413 }, 00:19:29.413 "memory_domains": [ 00:19:29.413 { 00:19:29.413 "dma_device_id": "system", 00:19:29.413 "dma_device_type": 1 00:19:29.413 }, 00:19:29.413 { 00:19:29.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.413 "dma_device_type": 2 00:19:29.413 } 00:19:29.413 ], 00:19:29.413 "driver_specific": {} 00:19:29.413 } 
00:19:29.413 ] 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:29.413 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.413 "name": "Existed_Raid", 00:19:29.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.413 "strip_size_kb": 64, 00:19:29.413 "state": "configuring", 00:19:29.413 "raid_level": "raid5f", 00:19:29.413 "superblock": false, 00:19:29.413 "num_base_bdevs": 4, 00:19:29.413 "num_base_bdevs_discovered": 1, 00:19:29.413 "num_base_bdevs_operational": 4, 00:19:29.413 "base_bdevs_list": [ 00:19:29.413 { 00:19:29.413 "name": "BaseBdev1", 00:19:29.413 "uuid": "44709fca-27fe-4a6c-914a-39487339c0ca", 00:19:29.413 "is_configured": true, 00:19:29.413 "data_offset": 0, 00:19:29.414 "data_size": 65536 00:19:29.414 }, 00:19:29.414 { 00:19:29.414 "name": "BaseBdev2", 00:19:29.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.414 "is_configured": false, 00:19:29.414 "data_offset": 0, 00:19:29.414 "data_size": 0 00:19:29.414 }, 00:19:29.414 { 00:19:29.414 "name": "BaseBdev3", 00:19:29.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.414 "is_configured": false, 00:19:29.414 "data_offset": 0, 00:19:29.414 "data_size": 0 00:19:29.414 }, 00:19:29.414 { 00:19:29.414 "name": "BaseBdev4", 00:19:29.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.414 "is_configured": false, 00:19:29.414 "data_offset": 0, 00:19:29.414 "data_size": 0 00:19:29.414 } 00:19:29.414 ] 00:19:29.414 }' 00:19:29.414 18:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.414 18:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.981 
[2024-12-06 18:17:55.241699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:29.981 [2024-12-06 18:17:55.241780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.981 [2024-12-06 18:17:55.249749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.981 [2024-12-06 18:17:55.252193] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.981 [2024-12-06 18:17:55.252253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.981 [2024-12-06 18:17:55.252270] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:29.981 [2024-12-06 18:17:55.252287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:29.981 [2024-12-06 18:17:55.252298] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:29.981 [2024-12-06 18:17:55.252311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.981 "name": "Existed_Raid", 00:19:29.981 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:29.981 "strip_size_kb": 64, 00:19:29.981 "state": "configuring", 00:19:29.981 "raid_level": "raid5f", 00:19:29.981 "superblock": false, 00:19:29.981 "num_base_bdevs": 4, 00:19:29.981 "num_base_bdevs_discovered": 1, 00:19:29.981 "num_base_bdevs_operational": 4, 00:19:29.981 "base_bdevs_list": [ 00:19:29.981 { 00:19:29.981 "name": "BaseBdev1", 00:19:29.981 "uuid": "44709fca-27fe-4a6c-914a-39487339c0ca", 00:19:29.981 "is_configured": true, 00:19:29.981 "data_offset": 0, 00:19:29.981 "data_size": 65536 00:19:29.981 }, 00:19:29.981 { 00:19:29.981 "name": "BaseBdev2", 00:19:29.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.981 "is_configured": false, 00:19:29.981 "data_offset": 0, 00:19:29.981 "data_size": 0 00:19:29.981 }, 00:19:29.981 { 00:19:29.981 "name": "BaseBdev3", 00:19:29.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.981 "is_configured": false, 00:19:29.981 "data_offset": 0, 00:19:29.981 "data_size": 0 00:19:29.981 }, 00:19:29.981 { 00:19:29.981 "name": "BaseBdev4", 00:19:29.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.981 "is_configured": false, 00:19:29.981 "data_offset": 0, 00:19:29.981 "data_size": 0 00:19:29.981 } 00:19:29.981 ] 00:19:29.981 }' 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.981 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.238 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:30.238 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.238 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.497 [2024-12-06 18:17:55.782357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:30.497 BaseBdev2 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.497 [ 00:19:30.497 { 00:19:30.497 "name": "BaseBdev2", 00:19:30.497 "aliases": [ 00:19:30.497 "08a917f9-41b2-4386-965a-ba1b95ee81a2" 00:19:30.497 ], 00:19:30.497 "product_name": "Malloc disk", 00:19:30.497 "block_size": 512, 00:19:30.497 "num_blocks": 65536, 00:19:30.497 "uuid": "08a917f9-41b2-4386-965a-ba1b95ee81a2", 00:19:30.497 "assigned_rate_limits": { 00:19:30.497 "rw_ios_per_sec": 0, 00:19:30.497 "rw_mbytes_per_sec": 0, 00:19:30.497 
"r_mbytes_per_sec": 0, 00:19:30.497 "w_mbytes_per_sec": 0 00:19:30.497 }, 00:19:30.497 "claimed": true, 00:19:30.497 "claim_type": "exclusive_write", 00:19:30.497 "zoned": false, 00:19:30.497 "supported_io_types": { 00:19:30.497 "read": true, 00:19:30.497 "write": true, 00:19:30.497 "unmap": true, 00:19:30.497 "flush": true, 00:19:30.497 "reset": true, 00:19:30.497 "nvme_admin": false, 00:19:30.497 "nvme_io": false, 00:19:30.497 "nvme_io_md": false, 00:19:30.497 "write_zeroes": true, 00:19:30.497 "zcopy": true, 00:19:30.497 "get_zone_info": false, 00:19:30.497 "zone_management": false, 00:19:30.497 "zone_append": false, 00:19:30.497 "compare": false, 00:19:30.497 "compare_and_write": false, 00:19:30.497 "abort": true, 00:19:30.497 "seek_hole": false, 00:19:30.497 "seek_data": false, 00:19:30.497 "copy": true, 00:19:30.497 "nvme_iov_md": false 00:19:30.497 }, 00:19:30.497 "memory_domains": [ 00:19:30.497 { 00:19:30.497 "dma_device_id": "system", 00:19:30.497 "dma_device_type": 1 00:19:30.497 }, 00:19:30.497 { 00:19:30.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.497 "dma_device_type": 2 00:19:30.497 } 00:19:30.497 ], 00:19:30.497 "driver_specific": {} 00:19:30.497 } 00:19:30.497 ] 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:30.497 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.498 "name": "Existed_Raid", 00:19:30.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.498 "strip_size_kb": 64, 00:19:30.498 "state": "configuring", 00:19:30.498 "raid_level": "raid5f", 00:19:30.498 "superblock": false, 00:19:30.498 "num_base_bdevs": 4, 00:19:30.498 "num_base_bdevs_discovered": 2, 00:19:30.498 "num_base_bdevs_operational": 4, 00:19:30.498 "base_bdevs_list": [ 00:19:30.498 { 00:19:30.498 "name": "BaseBdev1", 00:19:30.498 "uuid": 
"44709fca-27fe-4a6c-914a-39487339c0ca", 00:19:30.498 "is_configured": true, 00:19:30.498 "data_offset": 0, 00:19:30.498 "data_size": 65536 00:19:30.498 }, 00:19:30.498 { 00:19:30.498 "name": "BaseBdev2", 00:19:30.498 "uuid": "08a917f9-41b2-4386-965a-ba1b95ee81a2", 00:19:30.498 "is_configured": true, 00:19:30.498 "data_offset": 0, 00:19:30.498 "data_size": 65536 00:19:30.498 }, 00:19:30.498 { 00:19:30.498 "name": "BaseBdev3", 00:19:30.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.498 "is_configured": false, 00:19:30.498 "data_offset": 0, 00:19:30.498 "data_size": 0 00:19:30.498 }, 00:19:30.498 { 00:19:30.498 "name": "BaseBdev4", 00:19:30.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.498 "is_configured": false, 00:19:30.498 "data_offset": 0, 00:19:30.498 "data_size": 0 00:19:30.498 } 00:19:30.498 ] 00:19:30.498 }' 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.498 18:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.064 [2024-12-06 18:17:56.379228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:31.064 BaseBdev3 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.064 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.064 [ 00:19:31.064 { 00:19:31.064 "name": "BaseBdev3", 00:19:31.065 "aliases": [ 00:19:31.065 "6e077665-6a98-4ea6-a577-6cae3df37ef8" 00:19:31.065 ], 00:19:31.065 "product_name": "Malloc disk", 00:19:31.065 "block_size": 512, 00:19:31.065 "num_blocks": 65536, 00:19:31.065 "uuid": "6e077665-6a98-4ea6-a577-6cae3df37ef8", 00:19:31.065 "assigned_rate_limits": { 00:19:31.065 "rw_ios_per_sec": 0, 00:19:31.065 "rw_mbytes_per_sec": 0, 00:19:31.065 "r_mbytes_per_sec": 0, 00:19:31.065 "w_mbytes_per_sec": 0 00:19:31.065 }, 00:19:31.065 "claimed": true, 00:19:31.065 "claim_type": "exclusive_write", 00:19:31.065 "zoned": false, 00:19:31.065 "supported_io_types": { 00:19:31.065 "read": true, 00:19:31.065 "write": true, 00:19:31.065 "unmap": true, 00:19:31.065 "flush": true, 00:19:31.065 "reset": true, 00:19:31.065 "nvme_admin": false, 
00:19:31.065 "nvme_io": false, 00:19:31.065 "nvme_io_md": false, 00:19:31.065 "write_zeroes": true, 00:19:31.065 "zcopy": true, 00:19:31.065 "get_zone_info": false, 00:19:31.065 "zone_management": false, 00:19:31.065 "zone_append": false, 00:19:31.065 "compare": false, 00:19:31.065 "compare_and_write": false, 00:19:31.065 "abort": true, 00:19:31.065 "seek_hole": false, 00:19:31.065 "seek_data": false, 00:19:31.065 "copy": true, 00:19:31.065 "nvme_iov_md": false 00:19:31.065 }, 00:19:31.065 "memory_domains": [ 00:19:31.065 { 00:19:31.065 "dma_device_id": "system", 00:19:31.065 "dma_device_type": 1 00:19:31.065 }, 00:19:31.065 { 00:19:31.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.065 "dma_device_type": 2 00:19:31.065 } 00:19:31.065 ], 00:19:31.065 "driver_specific": {} 00:19:31.065 } 00:19:31.065 ] 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.065 "name": "Existed_Raid", 00:19:31.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.065 "strip_size_kb": 64, 00:19:31.065 "state": "configuring", 00:19:31.065 "raid_level": "raid5f", 00:19:31.065 "superblock": false, 00:19:31.065 "num_base_bdevs": 4, 00:19:31.065 "num_base_bdevs_discovered": 3, 00:19:31.065 "num_base_bdevs_operational": 4, 00:19:31.065 "base_bdevs_list": [ 00:19:31.065 { 00:19:31.065 "name": "BaseBdev1", 00:19:31.065 "uuid": "44709fca-27fe-4a6c-914a-39487339c0ca", 00:19:31.065 "is_configured": true, 00:19:31.065 "data_offset": 0, 00:19:31.065 "data_size": 65536 00:19:31.065 }, 00:19:31.065 { 00:19:31.065 "name": "BaseBdev2", 00:19:31.065 "uuid": "08a917f9-41b2-4386-965a-ba1b95ee81a2", 00:19:31.065 "is_configured": true, 00:19:31.065 "data_offset": 0, 00:19:31.065 "data_size": 65536 00:19:31.065 }, 00:19:31.065 { 
00:19:31.065 "name": "BaseBdev3", 00:19:31.065 "uuid": "6e077665-6a98-4ea6-a577-6cae3df37ef8", 00:19:31.065 "is_configured": true, 00:19:31.065 "data_offset": 0, 00:19:31.065 "data_size": 65536 00:19:31.065 }, 00:19:31.065 { 00:19:31.065 "name": "BaseBdev4", 00:19:31.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.065 "is_configured": false, 00:19:31.065 "data_offset": 0, 00:19:31.065 "data_size": 0 00:19:31.065 } 00:19:31.065 ] 00:19:31.065 }' 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.065 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.631 [2024-12-06 18:17:56.967254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:31.631 [2024-12-06 18:17:56.967351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:31.631 [2024-12-06 18:17:56.967366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:31.631 [2024-12-06 18:17:56.967706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:31.631 [2024-12-06 18:17:56.974767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:31.631 [2024-12-06 18:17:56.974811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:31.631 [2024-12-06 18:17:56.975141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.631 BaseBdev4 00:19:31.631 18:17:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.631 18:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.631 [ 00:19:31.631 { 00:19:31.631 "name": "BaseBdev4", 00:19:31.631 "aliases": [ 00:19:31.631 "f0f97a19-ce5e-4f7b-9358-aff484de7ed0" 00:19:31.631 ], 00:19:31.631 "product_name": "Malloc disk", 00:19:31.631 "block_size": 512, 00:19:31.631 "num_blocks": 65536, 00:19:31.631 "uuid": "f0f97a19-ce5e-4f7b-9358-aff484de7ed0", 00:19:31.631 "assigned_rate_limits": { 00:19:31.631 "rw_ios_per_sec": 0, 00:19:31.631 
"rw_mbytes_per_sec": 0, 00:19:31.631 "r_mbytes_per_sec": 0, 00:19:31.631 "w_mbytes_per_sec": 0 00:19:31.631 }, 00:19:31.631 "claimed": true, 00:19:31.631 "claim_type": "exclusive_write", 00:19:31.631 "zoned": false, 00:19:31.631 "supported_io_types": { 00:19:31.631 "read": true, 00:19:31.631 "write": true, 00:19:31.631 "unmap": true, 00:19:31.631 "flush": true, 00:19:31.631 "reset": true, 00:19:31.631 "nvme_admin": false, 00:19:31.631 "nvme_io": false, 00:19:31.631 "nvme_io_md": false, 00:19:31.631 "write_zeroes": true, 00:19:31.631 "zcopy": true, 00:19:31.631 "get_zone_info": false, 00:19:31.631 "zone_management": false, 00:19:31.631 "zone_append": false, 00:19:31.631 "compare": false, 00:19:31.631 "compare_and_write": false, 00:19:31.631 "abort": true, 00:19:31.631 "seek_hole": false, 00:19:31.631 "seek_data": false, 00:19:31.631 "copy": true, 00:19:31.631 "nvme_iov_md": false 00:19:31.631 }, 00:19:31.631 "memory_domains": [ 00:19:31.631 { 00:19:31.631 "dma_device_id": "system", 00:19:31.631 "dma_device_type": 1 00:19:31.631 }, 00:19:31.631 { 00:19:31.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.631 "dma_device_type": 2 00:19:31.631 } 00:19:31.631 ], 00:19:31.631 "driver_specific": {} 00:19:31.631 } 00:19:31.631 ] 00:19:31.631 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.631 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:31.631 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:31.631 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:31.631 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.632 18:17:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.632 "name": "Existed_Raid", 00:19:31.632 "uuid": "de19fb52-f9a1-4fe0-9279-fa14b8e6a80a", 00:19:31.632 "strip_size_kb": 64, 00:19:31.632 "state": "online", 00:19:31.632 "raid_level": "raid5f", 00:19:31.632 "superblock": false, 00:19:31.632 "num_base_bdevs": 4, 00:19:31.632 "num_base_bdevs_discovered": 4, 00:19:31.632 "num_base_bdevs_operational": 4, 00:19:31.632 "base_bdevs_list": [ 00:19:31.632 { 00:19:31.632 "name": 
"BaseBdev1", 00:19:31.632 "uuid": "44709fca-27fe-4a6c-914a-39487339c0ca", 00:19:31.632 "is_configured": true, 00:19:31.632 "data_offset": 0, 00:19:31.632 "data_size": 65536 00:19:31.632 }, 00:19:31.632 { 00:19:31.632 "name": "BaseBdev2", 00:19:31.632 "uuid": "08a917f9-41b2-4386-965a-ba1b95ee81a2", 00:19:31.632 "is_configured": true, 00:19:31.632 "data_offset": 0, 00:19:31.632 "data_size": 65536 00:19:31.632 }, 00:19:31.632 { 00:19:31.632 "name": "BaseBdev3", 00:19:31.632 "uuid": "6e077665-6a98-4ea6-a577-6cae3df37ef8", 00:19:31.632 "is_configured": true, 00:19:31.632 "data_offset": 0, 00:19:31.632 "data_size": 65536 00:19:31.632 }, 00:19:31.632 { 00:19:31.632 "name": "BaseBdev4", 00:19:31.632 "uuid": "f0f97a19-ce5e-4f7b-9358-aff484de7ed0", 00:19:31.632 "is_configured": true, 00:19:31.632 "data_offset": 0, 00:19:31.632 "data_size": 65536 00:19:31.632 } 00:19:31.632 ] 00:19:31.632 }' 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.632 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.254 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:32.254 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:32.254 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:32.254 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:32.254 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:32.254 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:32.254 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:32.254 18:17:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.254 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.254 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:32.254 [2024-12-06 18:17:57.538869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.254 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.254 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:32.254 "name": "Existed_Raid", 00:19:32.254 "aliases": [ 00:19:32.254 "de19fb52-f9a1-4fe0-9279-fa14b8e6a80a" 00:19:32.254 ], 00:19:32.254 "product_name": "Raid Volume", 00:19:32.254 "block_size": 512, 00:19:32.254 "num_blocks": 196608, 00:19:32.254 "uuid": "de19fb52-f9a1-4fe0-9279-fa14b8e6a80a", 00:19:32.254 "assigned_rate_limits": { 00:19:32.254 "rw_ios_per_sec": 0, 00:19:32.254 "rw_mbytes_per_sec": 0, 00:19:32.254 "r_mbytes_per_sec": 0, 00:19:32.254 "w_mbytes_per_sec": 0 00:19:32.254 }, 00:19:32.254 "claimed": false, 00:19:32.254 "zoned": false, 00:19:32.254 "supported_io_types": { 00:19:32.254 "read": true, 00:19:32.254 "write": true, 00:19:32.255 "unmap": false, 00:19:32.255 "flush": false, 00:19:32.255 "reset": true, 00:19:32.255 "nvme_admin": false, 00:19:32.255 "nvme_io": false, 00:19:32.255 "nvme_io_md": false, 00:19:32.255 "write_zeroes": true, 00:19:32.255 "zcopy": false, 00:19:32.255 "get_zone_info": false, 00:19:32.255 "zone_management": false, 00:19:32.255 "zone_append": false, 00:19:32.255 "compare": false, 00:19:32.255 "compare_and_write": false, 00:19:32.255 "abort": false, 00:19:32.255 "seek_hole": false, 00:19:32.255 "seek_data": false, 00:19:32.255 "copy": false, 00:19:32.255 "nvme_iov_md": false 00:19:32.255 }, 00:19:32.255 "driver_specific": { 00:19:32.255 "raid": { 00:19:32.255 "uuid": "de19fb52-f9a1-4fe0-9279-fa14b8e6a80a", 00:19:32.255 "strip_size_kb": 64, 
00:19:32.255 "state": "online", 00:19:32.255 "raid_level": "raid5f", 00:19:32.255 "superblock": false, 00:19:32.255 "num_base_bdevs": 4, 00:19:32.255 "num_base_bdevs_discovered": 4, 00:19:32.255 "num_base_bdevs_operational": 4, 00:19:32.255 "base_bdevs_list": [ 00:19:32.255 { 00:19:32.255 "name": "BaseBdev1", 00:19:32.255 "uuid": "44709fca-27fe-4a6c-914a-39487339c0ca", 00:19:32.255 "is_configured": true, 00:19:32.255 "data_offset": 0, 00:19:32.255 "data_size": 65536 00:19:32.255 }, 00:19:32.255 { 00:19:32.255 "name": "BaseBdev2", 00:19:32.255 "uuid": "08a917f9-41b2-4386-965a-ba1b95ee81a2", 00:19:32.255 "is_configured": true, 00:19:32.255 "data_offset": 0, 00:19:32.255 "data_size": 65536 00:19:32.255 }, 00:19:32.255 { 00:19:32.255 "name": "BaseBdev3", 00:19:32.255 "uuid": "6e077665-6a98-4ea6-a577-6cae3df37ef8", 00:19:32.255 "is_configured": true, 00:19:32.255 "data_offset": 0, 00:19:32.255 "data_size": 65536 00:19:32.255 }, 00:19:32.255 { 00:19:32.255 "name": "BaseBdev4", 00:19:32.255 "uuid": "f0f97a19-ce5e-4f7b-9358-aff484de7ed0", 00:19:32.255 "is_configured": true, 00:19:32.255 "data_offset": 0, 00:19:32.255 "data_size": 65536 00:19:32.255 } 00:19:32.255 ] 00:19:32.255 } 00:19:32.255 } 00:19:32.255 }' 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:32.255 BaseBdev2 00:19:32.255 BaseBdev3 00:19:32.255 BaseBdev4' 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.255 18:17:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.255 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.522 18:17:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.522 [2024-12-06 18:17:57.926819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.522 18:17:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.522 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.782 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.782 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.782 "name": "Existed_Raid", 00:19:32.782 "uuid": "de19fb52-f9a1-4fe0-9279-fa14b8e6a80a", 00:19:32.782 "strip_size_kb": 64, 00:19:32.782 "state": "online", 00:19:32.782 "raid_level": "raid5f", 00:19:32.782 "superblock": false, 00:19:32.782 "num_base_bdevs": 4, 00:19:32.782 "num_base_bdevs_discovered": 3, 00:19:32.782 "num_base_bdevs_operational": 3, 00:19:32.782 "base_bdevs_list": [ 00:19:32.782 { 00:19:32.782 "name": null, 00:19:32.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.782 "is_configured": false, 00:19:32.782 "data_offset": 0, 00:19:32.782 "data_size": 65536 00:19:32.782 }, 00:19:32.782 { 00:19:32.782 "name": "BaseBdev2", 00:19:32.782 "uuid": "08a917f9-41b2-4386-965a-ba1b95ee81a2", 00:19:32.782 "is_configured": true, 00:19:32.782 "data_offset": 0, 00:19:32.782 "data_size": 65536 00:19:32.782 }, 00:19:32.782 { 00:19:32.782 "name": "BaseBdev3", 00:19:32.782 "uuid": "6e077665-6a98-4ea6-a577-6cae3df37ef8", 00:19:32.782 "is_configured": true, 00:19:32.782 "data_offset": 0, 00:19:32.782 "data_size": 65536 00:19:32.782 }, 00:19:32.782 { 00:19:32.782 "name": "BaseBdev4", 00:19:32.782 "uuid": "f0f97a19-ce5e-4f7b-9358-aff484de7ed0", 00:19:32.782 "is_configured": true, 00:19:32.782 "data_offset": 0, 00:19:32.782 "data_size": 65536 00:19:32.782 } 00:19:32.782 ] 00:19:32.782 }' 00:19:32.782 
18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.782 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.040 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:33.040 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.040 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.040 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:33.040 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.040 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.040 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.298 [2024-12-06 18:17:58.590009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:33.298 [2024-12-06 18:17:58.590287] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.298 [2024-12-06 18:17:58.677543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.298 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.298 [2024-12-06 18:17:58.753566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:33.557 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.557 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:33.557 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.557 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.557 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:19:33.557 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.557 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.557 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.557 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:33.557 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.557 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:33.557 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.557 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.557 [2024-12-06 18:17:58.901226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:33.557 [2024-12-06 18:17:58.901292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:33.571 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.571 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:33.571 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.571 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:33.572 18:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.572 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.572 18:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.572 18:17:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.572 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:33.572 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:33.572 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:33.572 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:33.572 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.572 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:33.572 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.572 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.832 BaseBdev2 00:19:33.832 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.832 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:33.832 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:33.832 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:33.832 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.833 [ 00:19:33.833 { 00:19:33.833 "name": "BaseBdev2", 00:19:33.833 "aliases": [ 00:19:33.833 "5a2aeb08-c1de-4f2e-8959-9a3d95a4d812" 00:19:33.833 ], 00:19:33.833 "product_name": "Malloc disk", 00:19:33.833 "block_size": 512, 00:19:33.833 "num_blocks": 65536, 00:19:33.833 "uuid": "5a2aeb08-c1de-4f2e-8959-9a3d95a4d812", 00:19:33.833 "assigned_rate_limits": { 00:19:33.833 "rw_ios_per_sec": 0, 00:19:33.833 "rw_mbytes_per_sec": 0, 00:19:33.833 "r_mbytes_per_sec": 0, 00:19:33.833 "w_mbytes_per_sec": 0 00:19:33.833 }, 00:19:33.833 "claimed": false, 00:19:33.833 "zoned": false, 00:19:33.833 "supported_io_types": { 00:19:33.833 "read": true, 00:19:33.833 "write": true, 00:19:33.833 "unmap": true, 00:19:33.833 "flush": true, 00:19:33.833 "reset": true, 00:19:33.833 "nvme_admin": false, 00:19:33.833 "nvme_io": false, 00:19:33.833 "nvme_io_md": false, 00:19:33.833 "write_zeroes": true, 00:19:33.833 "zcopy": true, 00:19:33.833 "get_zone_info": false, 00:19:33.833 "zone_management": false, 00:19:33.833 "zone_append": false, 00:19:33.833 "compare": false, 00:19:33.833 "compare_and_write": false, 00:19:33.833 "abort": true, 00:19:33.833 "seek_hole": false, 00:19:33.833 "seek_data": false, 00:19:33.833 "copy": true, 00:19:33.833 "nvme_iov_md": false 00:19:33.833 }, 00:19:33.833 "memory_domains": [ 00:19:33.833 { 00:19:33.833 "dma_device_id": "system", 00:19:33.833 "dma_device_type": 1 00:19:33.833 }, 
00:19:33.833 { 00:19:33.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.833 "dma_device_type": 2 00:19:33.833 } 00:19:33.833 ], 00:19:33.833 "driver_specific": {} 00:19:33.833 } 00:19:33.833 ] 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.833 BaseBdev3 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.833 [ 00:19:33.833 { 00:19:33.833 "name": "BaseBdev3", 00:19:33.833 "aliases": [ 00:19:33.833 "d02df123-0c81-4d3c-a08f-0c0e096335d7" 00:19:33.833 ], 00:19:33.833 "product_name": "Malloc disk", 00:19:33.833 "block_size": 512, 00:19:33.833 "num_blocks": 65536, 00:19:33.833 "uuid": "d02df123-0c81-4d3c-a08f-0c0e096335d7", 00:19:33.833 "assigned_rate_limits": { 00:19:33.833 "rw_ios_per_sec": 0, 00:19:33.833 "rw_mbytes_per_sec": 0, 00:19:33.833 "r_mbytes_per_sec": 0, 00:19:33.833 "w_mbytes_per_sec": 0 00:19:33.833 }, 00:19:33.833 "claimed": false, 00:19:33.833 "zoned": false, 00:19:33.833 "supported_io_types": { 00:19:33.833 "read": true, 00:19:33.833 "write": true, 00:19:33.833 "unmap": true, 00:19:33.833 "flush": true, 00:19:33.833 "reset": true, 00:19:33.833 "nvme_admin": false, 00:19:33.833 "nvme_io": false, 00:19:33.833 "nvme_io_md": false, 00:19:33.833 "write_zeroes": true, 00:19:33.833 "zcopy": true, 00:19:33.833 "get_zone_info": false, 00:19:33.833 "zone_management": false, 00:19:33.833 "zone_append": false, 00:19:33.833 "compare": false, 00:19:33.833 "compare_and_write": false, 00:19:33.833 "abort": true, 00:19:33.833 "seek_hole": false, 00:19:33.833 "seek_data": false, 00:19:33.833 "copy": true, 00:19:33.833 "nvme_iov_md": false 00:19:33.833 }, 00:19:33.833 "memory_domains": [ 00:19:33.833 { 00:19:33.833 "dma_device_id": "system", 00:19:33.833 
"dma_device_type": 1 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.833 "dma_device_type": 2 00:19:33.833 } 00:19:33.833 ], 00:19:33.833 "driver_specific": {} 00:19:33.833 } 00:19:33.833 ] 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.833 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.834 BaseBdev4 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:33.834 18:17:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.834 [ 00:19:33.834 { 00:19:33.834 "name": "BaseBdev4", 00:19:33.834 "aliases": [ 00:19:33.834 "8fa11999-330a-4c21-beea-d49d512abd8f" 00:19:33.834 ], 00:19:33.834 "product_name": "Malloc disk", 00:19:33.834 "block_size": 512, 00:19:33.834 "num_blocks": 65536, 00:19:33.834 "uuid": "8fa11999-330a-4c21-beea-d49d512abd8f", 00:19:33.834 "assigned_rate_limits": { 00:19:33.834 "rw_ios_per_sec": 0, 00:19:33.834 "rw_mbytes_per_sec": 0, 00:19:33.834 "r_mbytes_per_sec": 0, 00:19:33.834 "w_mbytes_per_sec": 0 00:19:33.834 }, 00:19:33.834 "claimed": false, 00:19:33.834 "zoned": false, 00:19:33.834 "supported_io_types": { 00:19:33.834 "read": true, 00:19:33.834 "write": true, 00:19:33.834 "unmap": true, 00:19:33.834 "flush": true, 00:19:33.834 "reset": true, 00:19:33.834 "nvme_admin": false, 00:19:33.834 "nvme_io": false, 00:19:33.834 "nvme_io_md": false, 00:19:33.834 "write_zeroes": true, 00:19:33.834 "zcopy": true, 00:19:33.834 "get_zone_info": false, 00:19:33.834 "zone_management": false, 00:19:33.834 "zone_append": false, 00:19:33.834 "compare": false, 00:19:33.834 "compare_and_write": false, 00:19:33.834 "abort": true, 00:19:33.834 "seek_hole": false, 00:19:33.834 "seek_data": false, 00:19:33.834 "copy": true, 00:19:33.834 "nvme_iov_md": false 00:19:33.834 }, 00:19:33.834 "memory_domains": [ 00:19:33.834 { 00:19:33.834 
"dma_device_id": "system", 00:19:33.834 "dma_device_type": 1 00:19:33.834 }, 00:19:33.834 { 00:19:33.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.834 "dma_device_type": 2 00:19:33.834 } 00:19:33.834 ], 00:19:33.834 "driver_specific": {} 00:19:33.834 } 00:19:33.834 ] 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.834 [2024-12-06 18:17:59.278348] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:33.834 [2024-12-06 18:17:59.278420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:33.834 [2024-12-06 18:17:59.278458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.834 [2024-12-06 18:17:59.281010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:33.834 [2024-12-06 18:17:59.281104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.834 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.834 "name": "Existed_Raid", 00:19:33.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.834 "strip_size_kb": 64, 00:19:33.834 "state": "configuring", 00:19:33.834 "raid_level": "raid5f", 00:19:33.834 "superblock": false, 00:19:33.834 
"num_base_bdevs": 4, 00:19:33.834 "num_base_bdevs_discovered": 3, 00:19:33.834 "num_base_bdevs_operational": 4, 00:19:33.834 "base_bdevs_list": [ 00:19:33.834 { 00:19:33.834 "name": "BaseBdev1", 00:19:33.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.834 "is_configured": false, 00:19:33.834 "data_offset": 0, 00:19:33.834 "data_size": 0 00:19:33.834 }, 00:19:33.834 { 00:19:33.834 "name": "BaseBdev2", 00:19:33.834 "uuid": "5a2aeb08-c1de-4f2e-8959-9a3d95a4d812", 00:19:33.834 "is_configured": true, 00:19:33.834 "data_offset": 0, 00:19:33.834 "data_size": 65536 00:19:33.834 }, 00:19:33.834 { 00:19:33.834 "name": "BaseBdev3", 00:19:33.835 "uuid": "d02df123-0c81-4d3c-a08f-0c0e096335d7", 00:19:33.835 "is_configured": true, 00:19:33.835 "data_offset": 0, 00:19:33.835 "data_size": 65536 00:19:33.835 }, 00:19:33.835 { 00:19:33.835 "name": "BaseBdev4", 00:19:33.835 "uuid": "8fa11999-330a-4c21-beea-d49d512abd8f", 00:19:33.835 "is_configured": true, 00:19:33.835 "data_offset": 0, 00:19:33.835 "data_size": 65536 00:19:33.835 } 00:19:33.835 ] 00:19:33.835 }' 00:19:33.835 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.835 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.403 [2024-12-06 18:17:59.798504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.403 "name": "Existed_Raid", 00:19:34.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.403 "strip_size_kb": 64, 00:19:34.403 "state": "configuring", 00:19:34.403 "raid_level": "raid5f", 00:19:34.403 "superblock": false, 00:19:34.403 "num_base_bdevs": 4, 
00:19:34.403 "num_base_bdevs_discovered": 2, 00:19:34.403 "num_base_bdevs_operational": 4, 00:19:34.403 "base_bdevs_list": [ 00:19:34.403 { 00:19:34.403 "name": "BaseBdev1", 00:19:34.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.403 "is_configured": false, 00:19:34.403 "data_offset": 0, 00:19:34.403 "data_size": 0 00:19:34.403 }, 00:19:34.403 { 00:19:34.403 "name": null, 00:19:34.403 "uuid": "5a2aeb08-c1de-4f2e-8959-9a3d95a4d812", 00:19:34.403 "is_configured": false, 00:19:34.403 "data_offset": 0, 00:19:34.403 "data_size": 65536 00:19:34.403 }, 00:19:34.403 { 00:19:34.403 "name": "BaseBdev3", 00:19:34.403 "uuid": "d02df123-0c81-4d3c-a08f-0c0e096335d7", 00:19:34.403 "is_configured": true, 00:19:34.403 "data_offset": 0, 00:19:34.403 "data_size": 65536 00:19:34.403 }, 00:19:34.403 { 00:19:34.403 "name": "BaseBdev4", 00:19:34.403 "uuid": "8fa11999-330a-4c21-beea-d49d512abd8f", 00:19:34.403 "is_configured": true, 00:19:34.403 "data_offset": 0, 00:19:34.403 "data_size": 65536 00:19:34.403 } 00:19:34.403 ] 00:19:34.403 }' 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.403 18:17:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:34.970 18:18:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.970 [2024-12-06 18:18:00.417063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:34.970 BaseBdev1 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.970 18:18:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.970 [ 00:19:34.970 { 00:19:34.970 "name": "BaseBdev1", 00:19:34.970 "aliases": [ 00:19:34.970 "56b58606-a399-4eb8-bcc5-5eafbdd7e44a" 00:19:34.970 ], 00:19:34.970 "product_name": "Malloc disk", 00:19:34.970 "block_size": 512, 00:19:34.970 "num_blocks": 65536, 00:19:34.970 "uuid": "56b58606-a399-4eb8-bcc5-5eafbdd7e44a", 00:19:34.970 "assigned_rate_limits": { 00:19:34.970 "rw_ios_per_sec": 0, 00:19:34.970 "rw_mbytes_per_sec": 0, 00:19:34.970 "r_mbytes_per_sec": 0, 00:19:34.970 "w_mbytes_per_sec": 0 00:19:34.970 }, 00:19:34.970 "claimed": true, 00:19:34.970 "claim_type": "exclusive_write", 00:19:34.970 "zoned": false, 00:19:34.970 "supported_io_types": { 00:19:34.970 "read": true, 00:19:34.970 "write": true, 00:19:34.970 "unmap": true, 00:19:34.970 "flush": true, 00:19:34.970 "reset": true, 00:19:34.970 "nvme_admin": false, 00:19:34.970 "nvme_io": false, 00:19:34.970 "nvme_io_md": false, 00:19:34.970 "write_zeroes": true, 00:19:34.970 "zcopy": true, 00:19:34.970 "get_zone_info": false, 00:19:34.970 "zone_management": false, 00:19:34.970 "zone_append": false, 00:19:34.970 "compare": false, 00:19:34.970 "compare_and_write": false, 00:19:34.970 "abort": true, 00:19:34.970 "seek_hole": false, 00:19:34.970 "seek_data": false, 00:19:34.970 "copy": true, 00:19:34.970 "nvme_iov_md": false 00:19:34.970 }, 00:19:34.970 "memory_domains": [ 00:19:34.970 { 00:19:34.970 "dma_device_id": "system", 00:19:34.970 "dma_device_type": 1 00:19:34.970 }, 00:19:34.970 { 00:19:34.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.970 "dma_device_type": 2 00:19:34.970 } 00:19:34.970 ], 00:19:34.970 "driver_specific": {} 00:19:34.970 } 00:19:34.970 ] 00:19:34.970 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:34.971 18:18:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.971 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.230 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.230 "name": "Existed_Raid", 00:19:35.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.230 "strip_size_kb": 64, 00:19:35.230 "state": 
"configuring", 00:19:35.230 "raid_level": "raid5f", 00:19:35.230 "superblock": false, 00:19:35.230 "num_base_bdevs": 4, 00:19:35.230 "num_base_bdevs_discovered": 3, 00:19:35.230 "num_base_bdevs_operational": 4, 00:19:35.230 "base_bdevs_list": [ 00:19:35.230 { 00:19:35.230 "name": "BaseBdev1", 00:19:35.230 "uuid": "56b58606-a399-4eb8-bcc5-5eafbdd7e44a", 00:19:35.230 "is_configured": true, 00:19:35.230 "data_offset": 0, 00:19:35.230 "data_size": 65536 00:19:35.230 }, 00:19:35.230 { 00:19:35.230 "name": null, 00:19:35.230 "uuid": "5a2aeb08-c1de-4f2e-8959-9a3d95a4d812", 00:19:35.230 "is_configured": false, 00:19:35.230 "data_offset": 0, 00:19:35.230 "data_size": 65536 00:19:35.230 }, 00:19:35.230 { 00:19:35.230 "name": "BaseBdev3", 00:19:35.230 "uuid": "d02df123-0c81-4d3c-a08f-0c0e096335d7", 00:19:35.230 "is_configured": true, 00:19:35.230 "data_offset": 0, 00:19:35.230 "data_size": 65536 00:19:35.230 }, 00:19:35.230 { 00:19:35.230 "name": "BaseBdev4", 00:19:35.230 "uuid": "8fa11999-330a-4c21-beea-d49d512abd8f", 00:19:35.230 "is_configured": true, 00:19:35.230 "data_offset": 0, 00:19:35.230 "data_size": 65536 00:19:35.230 } 00:19:35.230 ] 00:19:35.230 }' 00:19:35.230 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.230 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.489 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.489 18:18:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:35.489 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.489 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.489 18:18:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.747 18:18:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.747 [2024-12-06 18:18:01.009562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.747 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.747 18:18:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.748 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.748 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.748 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.748 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.748 "name": "Existed_Raid", 00:19:35.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.748 "strip_size_kb": 64, 00:19:35.748 "state": "configuring", 00:19:35.748 "raid_level": "raid5f", 00:19:35.748 "superblock": false, 00:19:35.748 "num_base_bdevs": 4, 00:19:35.748 "num_base_bdevs_discovered": 2, 00:19:35.748 "num_base_bdevs_operational": 4, 00:19:35.748 "base_bdevs_list": [ 00:19:35.748 { 00:19:35.748 "name": "BaseBdev1", 00:19:35.748 "uuid": "56b58606-a399-4eb8-bcc5-5eafbdd7e44a", 00:19:35.748 "is_configured": true, 00:19:35.748 "data_offset": 0, 00:19:35.748 "data_size": 65536 00:19:35.748 }, 00:19:35.748 { 00:19:35.748 "name": null, 00:19:35.748 "uuid": "5a2aeb08-c1de-4f2e-8959-9a3d95a4d812", 00:19:35.748 "is_configured": false, 00:19:35.748 "data_offset": 0, 00:19:35.748 "data_size": 65536 00:19:35.748 }, 00:19:35.748 { 00:19:35.748 "name": null, 00:19:35.748 "uuid": "d02df123-0c81-4d3c-a08f-0c0e096335d7", 00:19:35.748 "is_configured": false, 00:19:35.748 "data_offset": 0, 00:19:35.748 "data_size": 65536 00:19:35.748 }, 00:19:35.748 { 00:19:35.748 "name": "BaseBdev4", 00:19:35.748 "uuid": "8fa11999-330a-4c21-beea-d49d512abd8f", 00:19:35.748 "is_configured": true, 00:19:35.748 "data_offset": 0, 00:19:35.748 "data_size": 65536 00:19:35.748 } 00:19:35.748 ] 00:19:35.748 }' 00:19:35.748 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.748 18:18:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.006 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.006 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.006 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.006 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.264 [2024-12-06 18:18:01.577722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.264 
18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.264 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.264 "name": "Existed_Raid", 00:19:36.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.264 "strip_size_kb": 64, 00:19:36.264 "state": "configuring", 00:19:36.264 "raid_level": "raid5f", 00:19:36.264 "superblock": false, 00:19:36.264 "num_base_bdevs": 4, 00:19:36.264 "num_base_bdevs_discovered": 3, 00:19:36.264 "num_base_bdevs_operational": 4, 00:19:36.264 "base_bdevs_list": [ 00:19:36.264 { 00:19:36.264 "name": "BaseBdev1", 00:19:36.264 "uuid": "56b58606-a399-4eb8-bcc5-5eafbdd7e44a", 00:19:36.264 "is_configured": true, 00:19:36.265 "data_offset": 0, 00:19:36.265 "data_size": 65536 00:19:36.265 }, 00:19:36.265 { 00:19:36.265 "name": null, 00:19:36.265 "uuid": "5a2aeb08-c1de-4f2e-8959-9a3d95a4d812", 00:19:36.265 "is_configured": 
false, 00:19:36.265 "data_offset": 0, 00:19:36.265 "data_size": 65536 00:19:36.265 }, 00:19:36.265 { 00:19:36.265 "name": "BaseBdev3", 00:19:36.265 "uuid": "d02df123-0c81-4d3c-a08f-0c0e096335d7", 00:19:36.265 "is_configured": true, 00:19:36.265 "data_offset": 0, 00:19:36.265 "data_size": 65536 00:19:36.265 }, 00:19:36.265 { 00:19:36.265 "name": "BaseBdev4", 00:19:36.265 "uuid": "8fa11999-330a-4c21-beea-d49d512abd8f", 00:19:36.265 "is_configured": true, 00:19:36.265 "data_offset": 0, 00:19:36.265 "data_size": 65536 00:19:36.265 } 00:19:36.265 ] 00:19:36.265 }' 00:19:36.265 18:18:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.265 18:18:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.834 [2024-12-06 18:18:02.121940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.834 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.834 "name": "Existed_Raid", 00:19:36.834 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:36.834 "strip_size_kb": 64, 00:19:36.834 "state": "configuring", 00:19:36.834 "raid_level": "raid5f", 00:19:36.834 "superblock": false, 00:19:36.834 "num_base_bdevs": 4, 00:19:36.834 "num_base_bdevs_discovered": 2, 00:19:36.834 "num_base_bdevs_operational": 4, 00:19:36.834 "base_bdevs_list": [ 00:19:36.834 { 00:19:36.834 "name": null, 00:19:36.834 "uuid": "56b58606-a399-4eb8-bcc5-5eafbdd7e44a", 00:19:36.834 "is_configured": false, 00:19:36.834 "data_offset": 0, 00:19:36.834 "data_size": 65536 00:19:36.834 }, 00:19:36.834 { 00:19:36.834 "name": null, 00:19:36.834 "uuid": "5a2aeb08-c1de-4f2e-8959-9a3d95a4d812", 00:19:36.834 "is_configured": false, 00:19:36.834 "data_offset": 0, 00:19:36.834 "data_size": 65536 00:19:36.834 }, 00:19:36.834 { 00:19:36.834 "name": "BaseBdev3", 00:19:36.834 "uuid": "d02df123-0c81-4d3c-a08f-0c0e096335d7", 00:19:36.834 "is_configured": true, 00:19:36.834 "data_offset": 0, 00:19:36.834 "data_size": 65536 00:19:36.834 }, 00:19:36.834 { 00:19:36.834 "name": "BaseBdev4", 00:19:36.834 "uuid": "8fa11999-330a-4c21-beea-d49d512abd8f", 00:19:36.834 "is_configured": true, 00:19:36.835 "data_offset": 0, 00:19:36.835 "data_size": 65536 00:19:36.835 } 00:19:36.835 ] 00:19:36.835 }' 00:19:36.835 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.835 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.401 [2024-12-06 18:18:02.777015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.401 "name": "Existed_Raid", 00:19:37.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.401 "strip_size_kb": 64, 00:19:37.401 "state": "configuring", 00:19:37.401 "raid_level": "raid5f", 00:19:37.401 "superblock": false, 00:19:37.401 "num_base_bdevs": 4, 00:19:37.401 "num_base_bdevs_discovered": 3, 00:19:37.401 "num_base_bdevs_operational": 4, 00:19:37.401 "base_bdevs_list": [ 00:19:37.401 { 00:19:37.401 "name": null, 00:19:37.401 "uuid": "56b58606-a399-4eb8-bcc5-5eafbdd7e44a", 00:19:37.401 "is_configured": false, 00:19:37.401 "data_offset": 0, 00:19:37.401 "data_size": 65536 00:19:37.401 }, 00:19:37.401 { 00:19:37.401 "name": "BaseBdev2", 00:19:37.401 "uuid": "5a2aeb08-c1de-4f2e-8959-9a3d95a4d812", 00:19:37.401 "is_configured": true, 00:19:37.401 "data_offset": 0, 00:19:37.401 "data_size": 65536 00:19:37.401 }, 00:19:37.401 { 00:19:37.401 "name": "BaseBdev3", 00:19:37.401 "uuid": "d02df123-0c81-4d3c-a08f-0c0e096335d7", 00:19:37.401 "is_configured": true, 00:19:37.401 "data_offset": 0, 00:19:37.401 "data_size": 65536 00:19:37.401 }, 00:19:37.401 { 00:19:37.401 "name": "BaseBdev4", 00:19:37.401 "uuid": "8fa11999-330a-4c21-beea-d49d512abd8f", 00:19:37.401 "is_configured": true, 00:19:37.401 "data_offset": 0, 00:19:37.401 "data_size": 65536 00:19:37.401 } 00:19:37.401 ] 00:19:37.401 }' 00:19:37.401 18:18:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.401 18:18:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 56b58606-a399-4eb8-bcc5-5eafbdd7e44a 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.967 [2024-12-06 18:18:03.452611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:37.967 [2024-12-06 
18:18:03.452673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:37.967 [2024-12-06 18:18:03.452685] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:37.967 [2024-12-06 18:18:03.453059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:37.967 [2024-12-06 18:18:03.459559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:37.967 [2024-12-06 18:18:03.459767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:37.967 [2024-12-06 18:18:03.460118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.967 NewBaseBdev 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.967 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.968 18:18:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.968 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:37.968 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.968 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.968 [ 00:19:37.968 { 00:19:37.968 "name": "NewBaseBdev", 00:19:37.968 "aliases": [ 00:19:37.968 "56b58606-a399-4eb8-bcc5-5eafbdd7e44a" 00:19:37.968 ], 00:19:37.968 "product_name": "Malloc disk", 00:19:37.968 "block_size": 512, 00:19:37.968 "num_blocks": 65536, 00:19:37.968 "uuid": "56b58606-a399-4eb8-bcc5-5eafbdd7e44a", 00:19:37.968 "assigned_rate_limits": { 00:19:37.968 "rw_ios_per_sec": 0, 00:19:37.968 "rw_mbytes_per_sec": 0, 00:19:37.968 "r_mbytes_per_sec": 0, 00:19:37.968 "w_mbytes_per_sec": 0 00:19:37.968 }, 00:19:37.968 "claimed": true, 00:19:37.968 "claim_type": "exclusive_write", 00:19:37.968 "zoned": false, 00:19:37.968 "supported_io_types": { 00:19:37.968 "read": true, 00:19:37.968 "write": true, 00:19:38.226 "unmap": true, 00:19:38.226 "flush": true, 00:19:38.226 "reset": true, 00:19:38.226 "nvme_admin": false, 00:19:38.226 "nvme_io": false, 00:19:38.226 "nvme_io_md": false, 00:19:38.226 "write_zeroes": true, 00:19:38.226 "zcopy": true, 00:19:38.226 "get_zone_info": false, 00:19:38.226 "zone_management": false, 00:19:38.226 "zone_append": false, 00:19:38.226 "compare": false, 00:19:38.226 "compare_and_write": false, 00:19:38.226 "abort": true, 00:19:38.226 "seek_hole": false, 00:19:38.226 "seek_data": false, 00:19:38.226 "copy": true, 00:19:38.226 "nvme_iov_md": false 00:19:38.226 }, 00:19:38.226 "memory_domains": [ 00:19:38.226 { 00:19:38.226 "dma_device_id": "system", 00:19:38.226 "dma_device_type": 1 00:19:38.226 }, 00:19:38.226 { 00:19:38.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.226 "dma_device_type": 2 00:19:38.226 } 
00:19:38.226 ], 00:19:38.226 "driver_specific": {} 00:19:38.226 } 00:19:38.226 ] 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.226 "name": "Existed_Raid", 00:19:38.226 "uuid": "667253d9-e761-408a-8fc6-acd118cadb66", 00:19:38.226 "strip_size_kb": 64, 00:19:38.226 "state": "online", 00:19:38.226 "raid_level": "raid5f", 00:19:38.226 "superblock": false, 00:19:38.226 "num_base_bdevs": 4, 00:19:38.226 "num_base_bdevs_discovered": 4, 00:19:38.226 "num_base_bdevs_operational": 4, 00:19:38.226 "base_bdevs_list": [ 00:19:38.226 { 00:19:38.226 "name": "NewBaseBdev", 00:19:38.226 "uuid": "56b58606-a399-4eb8-bcc5-5eafbdd7e44a", 00:19:38.226 "is_configured": true, 00:19:38.226 "data_offset": 0, 00:19:38.226 "data_size": 65536 00:19:38.226 }, 00:19:38.226 { 00:19:38.226 "name": "BaseBdev2", 00:19:38.226 "uuid": "5a2aeb08-c1de-4f2e-8959-9a3d95a4d812", 00:19:38.226 "is_configured": true, 00:19:38.226 "data_offset": 0, 00:19:38.226 "data_size": 65536 00:19:38.226 }, 00:19:38.226 { 00:19:38.226 "name": "BaseBdev3", 00:19:38.226 "uuid": "d02df123-0c81-4d3c-a08f-0c0e096335d7", 00:19:38.226 "is_configured": true, 00:19:38.226 "data_offset": 0, 00:19:38.226 "data_size": 65536 00:19:38.226 }, 00:19:38.226 { 00:19:38.226 "name": "BaseBdev4", 00:19:38.226 "uuid": "8fa11999-330a-4c21-beea-d49d512abd8f", 00:19:38.226 "is_configured": true, 00:19:38.226 "data_offset": 0, 00:19:38.226 "data_size": 65536 00:19:38.226 } 00:19:38.226 ] 00:19:38.226 }' 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.226 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.483 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:38.483 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:38.483 18:18:03 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:38.483 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:38.483 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:38.483 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:38.483 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:38.483 18:18:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:38.483 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.483 18:18:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.483 [2024-12-06 18:18:03.999944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.740 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.740 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:38.740 "name": "Existed_Raid", 00:19:38.740 "aliases": [ 00:19:38.740 "667253d9-e761-408a-8fc6-acd118cadb66" 00:19:38.740 ], 00:19:38.740 "product_name": "Raid Volume", 00:19:38.740 "block_size": 512, 00:19:38.740 "num_blocks": 196608, 00:19:38.740 "uuid": "667253d9-e761-408a-8fc6-acd118cadb66", 00:19:38.740 "assigned_rate_limits": { 00:19:38.740 "rw_ios_per_sec": 0, 00:19:38.740 "rw_mbytes_per_sec": 0, 00:19:38.740 "r_mbytes_per_sec": 0, 00:19:38.740 "w_mbytes_per_sec": 0 00:19:38.740 }, 00:19:38.740 "claimed": false, 00:19:38.740 "zoned": false, 00:19:38.740 "supported_io_types": { 00:19:38.740 "read": true, 00:19:38.740 "write": true, 00:19:38.740 "unmap": false, 00:19:38.740 "flush": false, 00:19:38.740 "reset": true, 00:19:38.740 "nvme_admin": false, 00:19:38.740 "nvme_io": false, 00:19:38.740 "nvme_io_md": 
false, 00:19:38.740 "write_zeroes": true, 00:19:38.740 "zcopy": false, 00:19:38.740 "get_zone_info": false, 00:19:38.740 "zone_management": false, 00:19:38.740 "zone_append": false, 00:19:38.740 "compare": false, 00:19:38.740 "compare_and_write": false, 00:19:38.740 "abort": false, 00:19:38.740 "seek_hole": false, 00:19:38.740 "seek_data": false, 00:19:38.740 "copy": false, 00:19:38.740 "nvme_iov_md": false 00:19:38.740 }, 00:19:38.740 "driver_specific": { 00:19:38.740 "raid": { 00:19:38.740 "uuid": "667253d9-e761-408a-8fc6-acd118cadb66", 00:19:38.740 "strip_size_kb": 64, 00:19:38.740 "state": "online", 00:19:38.740 "raid_level": "raid5f", 00:19:38.740 "superblock": false, 00:19:38.740 "num_base_bdevs": 4, 00:19:38.740 "num_base_bdevs_discovered": 4, 00:19:38.740 "num_base_bdevs_operational": 4, 00:19:38.740 "base_bdevs_list": [ 00:19:38.740 { 00:19:38.740 "name": "NewBaseBdev", 00:19:38.740 "uuid": "56b58606-a399-4eb8-bcc5-5eafbdd7e44a", 00:19:38.740 "is_configured": true, 00:19:38.740 "data_offset": 0, 00:19:38.740 "data_size": 65536 00:19:38.740 }, 00:19:38.740 { 00:19:38.740 "name": "BaseBdev2", 00:19:38.740 "uuid": "5a2aeb08-c1de-4f2e-8959-9a3d95a4d812", 00:19:38.740 "is_configured": true, 00:19:38.740 "data_offset": 0, 00:19:38.740 "data_size": 65536 00:19:38.740 }, 00:19:38.740 { 00:19:38.740 "name": "BaseBdev3", 00:19:38.740 "uuid": "d02df123-0c81-4d3c-a08f-0c0e096335d7", 00:19:38.740 "is_configured": true, 00:19:38.740 "data_offset": 0, 00:19:38.740 "data_size": 65536 00:19:38.740 }, 00:19:38.740 { 00:19:38.740 "name": "BaseBdev4", 00:19:38.740 "uuid": "8fa11999-330a-4c21-beea-d49d512abd8f", 00:19:38.740 "is_configured": true, 00:19:38.740 "data_offset": 0, 00:19:38.740 "data_size": 65536 00:19:38.740 } 00:19:38.740 ] 00:19:38.740 } 00:19:38.740 } 00:19:38.740 }' 00:19:38.740 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:38.740 18:18:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:38.740 BaseBdev2 00:19:38.740 BaseBdev3 00:19:38.740 BaseBdev4' 00:19:38.740 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.740 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:38.740 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.741 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.998 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.998 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:38.998 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:38.998 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.998 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:38.998 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.998 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.998 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.998 18:18:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.998 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:38.998 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:38.998 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:38.999 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.999 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.999 [2024-12-06 18:18:04.351709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:38.999 [2024-12-06 18:18:04.351765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:38.999 [2024-12-06 18:18:04.351890] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.999 [2024-12-06 18:18:04.352269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.999 [2024-12-06 18:18:04.352298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:38.999 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.999 18:18:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83204 00:19:38.999 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83204 ']' 00:19:38.999 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83204 00:19:38.999 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:38.999 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:19:38.999 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83204 00:19:38.999 killing process with pid 83204 00:19:38.999 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.999 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.999 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83204' 00:19:38.999 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83204 00:19:38.999 [2024-12-06 18:18:04.388101] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:38.999 18:18:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83204 00:19:39.256 [2024-12-06 18:18:04.733631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:40.626 ************************************ 00:19:40.626 END TEST raid5f_state_function_test 00:19:40.626 ************************************ 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:40.626 00:19:40.626 real 0m12.784s 00:19:40.626 user 0m21.195s 00:19:40.626 sys 0m1.772s 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.626 18:18:05 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:19:40.626 18:18:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:40.626 18:18:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.626 18:18:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.626 ************************************ 00:19:40.626 START TEST 
raid5f_state_function_test_sb 00:19:40.626 ************************************ 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:40.626 
18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83881 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:40.626 Process raid pid: 83881 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83881' 00:19:40.626 18:18:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83881 00:19:40.626 18:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83881 ']' 00:19:40.627 18:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.627 18:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.627 18:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.627 18:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.627 18:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.627 [2024-12-06 18:18:05.970760] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:19:40.627 [2024-12-06 18:18:05.971033] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.884 [2024-12-06 18:18:06.161583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.884 [2024-12-06 18:18:06.292478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.142 [2024-12-06 18:18:06.496600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.142 [2024-12-06 18:18:06.496658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.710 [2024-12-06 18:18:06.961973] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:41.710 [2024-12-06 18:18:06.962044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:41.710 [2024-12-06 18:18:06.962062] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:41.710 [2024-12-06 18:18:06.962079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:41.710 [2024-12-06 18:18:06.962089] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:19:41.710 [2024-12-06 18:18:06.962103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:41.710 [2024-12-06 18:18:06.962112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:41.710 [2024-12-06 18:18:06.962127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.710 18:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.710 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.710 "name": "Existed_Raid", 00:19:41.710 "uuid": "e9d6dab2-c84a-49e4-886a-dc7e986b4443", 00:19:41.710 "strip_size_kb": 64, 00:19:41.710 "state": "configuring", 00:19:41.710 "raid_level": "raid5f", 00:19:41.710 "superblock": true, 00:19:41.710 "num_base_bdevs": 4, 00:19:41.710 "num_base_bdevs_discovered": 0, 00:19:41.710 "num_base_bdevs_operational": 4, 00:19:41.710 "base_bdevs_list": [ 00:19:41.710 { 00:19:41.710 "name": "BaseBdev1", 00:19:41.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.710 "is_configured": false, 00:19:41.710 "data_offset": 0, 00:19:41.710 "data_size": 0 00:19:41.710 }, 00:19:41.710 { 00:19:41.710 "name": "BaseBdev2", 00:19:41.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.710 "is_configured": false, 00:19:41.710 "data_offset": 0, 00:19:41.710 "data_size": 0 00:19:41.710 }, 00:19:41.710 { 00:19:41.710 "name": "BaseBdev3", 00:19:41.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.710 "is_configured": false, 00:19:41.710 "data_offset": 0, 00:19:41.710 "data_size": 0 00:19:41.710 }, 00:19:41.710 { 00:19:41.710 "name": "BaseBdev4", 00:19:41.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.710 "is_configured": false, 00:19:41.710 "data_offset": 0, 00:19:41.710 "data_size": 0 00:19:41.711 } 00:19:41.711 ] 00:19:41.711 }' 00:19:41.711 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.711 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.284 [2024-12-06 18:18:07.526033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:42.284 [2024-12-06 18:18:07.526085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.284 [2024-12-06 18:18:07.538076] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:42.284 [2024-12-06 18:18:07.538143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:42.284 [2024-12-06 18:18:07.538159] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:42.284 [2024-12-06 18:18:07.538175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:42.284 [2024-12-06 18:18:07.538185] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:42.284 [2024-12-06 18:18:07.538199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:42.284 [2024-12-06 18:18:07.538209] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:42.284 [2024-12-06 18:18:07.538223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.284 [2024-12-06 18:18:07.583133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:42.284 BaseBdev1 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.284 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.284 [ 00:19:42.284 { 00:19:42.284 "name": "BaseBdev1", 00:19:42.284 "aliases": [ 00:19:42.284 "2897a98e-38c8-4154-b791-1c23e4f46d98" 00:19:42.284 ], 00:19:42.284 "product_name": "Malloc disk", 00:19:42.284 "block_size": 512, 00:19:42.284 "num_blocks": 65536, 00:19:42.284 "uuid": "2897a98e-38c8-4154-b791-1c23e4f46d98", 00:19:42.284 "assigned_rate_limits": { 00:19:42.284 "rw_ios_per_sec": 0, 00:19:42.284 "rw_mbytes_per_sec": 0, 00:19:42.284 "r_mbytes_per_sec": 0, 00:19:42.284 "w_mbytes_per_sec": 0 00:19:42.284 }, 00:19:42.284 "claimed": true, 00:19:42.284 "claim_type": "exclusive_write", 00:19:42.284 "zoned": false, 00:19:42.284 "supported_io_types": { 00:19:42.284 "read": true, 00:19:42.284 "write": true, 00:19:42.284 "unmap": true, 00:19:42.284 "flush": true, 00:19:42.284 "reset": true, 00:19:42.284 "nvme_admin": false, 00:19:42.284 "nvme_io": false, 00:19:42.284 "nvme_io_md": false, 00:19:42.284 "write_zeroes": true, 00:19:42.284 "zcopy": true, 00:19:42.284 "get_zone_info": false, 00:19:42.284 "zone_management": false, 00:19:42.284 "zone_append": false, 00:19:42.284 "compare": false, 00:19:42.284 "compare_and_write": false, 00:19:42.284 "abort": true, 00:19:42.284 "seek_hole": false, 00:19:42.284 "seek_data": false, 00:19:42.284 "copy": true, 00:19:42.284 "nvme_iov_md": false 00:19:42.284 }, 00:19:42.284 "memory_domains": [ 00:19:42.284 { 00:19:42.284 "dma_device_id": "system", 00:19:42.284 "dma_device_type": 1 00:19:42.284 }, 00:19:42.284 { 00:19:42.284 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:42.284 "dma_device_type": 2 00:19:42.284 } 00:19:42.284 ], 00:19:42.284 "driver_specific": {} 00:19:42.284 } 00:19:42.284 ] 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.285 "name": "Existed_Raid", 00:19:42.285 "uuid": "15b93523-fc58-443e-951d-4f9b3e665b10", 00:19:42.285 "strip_size_kb": 64, 00:19:42.285 "state": "configuring", 00:19:42.285 "raid_level": "raid5f", 00:19:42.285 "superblock": true, 00:19:42.285 "num_base_bdevs": 4, 00:19:42.285 "num_base_bdevs_discovered": 1, 00:19:42.285 "num_base_bdevs_operational": 4, 00:19:42.285 "base_bdevs_list": [ 00:19:42.285 { 00:19:42.285 "name": "BaseBdev1", 00:19:42.285 "uuid": "2897a98e-38c8-4154-b791-1c23e4f46d98", 00:19:42.285 "is_configured": true, 00:19:42.285 "data_offset": 2048, 00:19:42.285 "data_size": 63488 00:19:42.285 }, 00:19:42.285 { 00:19:42.285 "name": "BaseBdev2", 00:19:42.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.285 "is_configured": false, 00:19:42.285 "data_offset": 0, 00:19:42.285 "data_size": 0 00:19:42.285 }, 00:19:42.285 { 00:19:42.285 "name": "BaseBdev3", 00:19:42.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.285 "is_configured": false, 00:19:42.285 "data_offset": 0, 00:19:42.285 "data_size": 0 00:19:42.285 }, 00:19:42.285 { 00:19:42.285 "name": "BaseBdev4", 00:19:42.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.285 "is_configured": false, 00:19:42.285 "data_offset": 0, 00:19:42.285 "data_size": 0 00:19:42.285 } 00:19:42.285 ] 00:19:42.285 }' 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.285 18:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:42.852 18:18:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.852 [2024-12-06 18:18:08.163351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:42.852 [2024-12-06 18:18:08.163429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.852 [2024-12-06 18:18:08.171455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:42.852 [2024-12-06 18:18:08.173995] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:42.852 [2024-12-06 18:18:08.174061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:42.852 [2024-12-06 18:18:08.174078] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:42.852 [2024-12-06 18:18:08.174096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:42.852 [2024-12-06 18:18:08.174107] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:42.852 [2024-12-06 18:18:08.174131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.852 18:18:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.852 "name": "Existed_Raid", 00:19:42.852 "uuid": "8606218c-a73a-4e70-975a-fbc72bc03cb3", 00:19:42.852 "strip_size_kb": 64, 00:19:42.852 "state": "configuring", 00:19:42.852 "raid_level": "raid5f", 00:19:42.852 "superblock": true, 00:19:42.852 "num_base_bdevs": 4, 00:19:42.852 "num_base_bdevs_discovered": 1, 00:19:42.852 "num_base_bdevs_operational": 4, 00:19:42.852 "base_bdevs_list": [ 00:19:42.852 { 00:19:42.852 "name": "BaseBdev1", 00:19:42.852 "uuid": "2897a98e-38c8-4154-b791-1c23e4f46d98", 00:19:42.852 "is_configured": true, 00:19:42.852 "data_offset": 2048, 00:19:42.852 "data_size": 63488 00:19:42.852 }, 00:19:42.852 { 00:19:42.852 "name": "BaseBdev2", 00:19:42.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.852 "is_configured": false, 00:19:42.852 "data_offset": 0, 00:19:42.852 "data_size": 0 00:19:42.852 }, 00:19:42.852 { 00:19:42.852 "name": "BaseBdev3", 00:19:42.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.852 "is_configured": false, 00:19:42.852 "data_offset": 0, 00:19:42.852 "data_size": 0 00:19:42.852 }, 00:19:42.852 { 00:19:42.852 "name": "BaseBdev4", 00:19:42.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.852 "is_configured": false, 00:19:42.852 "data_offset": 0, 00:19:42.852 "data_size": 0 00:19:42.852 } 00:19:42.852 ] 00:19:42.852 }' 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.852 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.419 [2024-12-06 18:18:08.694826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:43.419 BaseBdev2 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.419 [ 00:19:43.419 { 00:19:43.419 "name": "BaseBdev2", 00:19:43.419 "aliases": [ 00:19:43.419 
"7b05d4c7-3131-41cf-a3d3-64c953cf3be5" 00:19:43.419 ], 00:19:43.419 "product_name": "Malloc disk", 00:19:43.419 "block_size": 512, 00:19:43.419 "num_blocks": 65536, 00:19:43.419 "uuid": "7b05d4c7-3131-41cf-a3d3-64c953cf3be5", 00:19:43.419 "assigned_rate_limits": { 00:19:43.419 "rw_ios_per_sec": 0, 00:19:43.419 "rw_mbytes_per_sec": 0, 00:19:43.419 "r_mbytes_per_sec": 0, 00:19:43.419 "w_mbytes_per_sec": 0 00:19:43.419 }, 00:19:43.419 "claimed": true, 00:19:43.419 "claim_type": "exclusive_write", 00:19:43.419 "zoned": false, 00:19:43.419 "supported_io_types": { 00:19:43.419 "read": true, 00:19:43.419 "write": true, 00:19:43.419 "unmap": true, 00:19:43.419 "flush": true, 00:19:43.419 "reset": true, 00:19:43.419 "nvme_admin": false, 00:19:43.419 "nvme_io": false, 00:19:43.419 "nvme_io_md": false, 00:19:43.419 "write_zeroes": true, 00:19:43.419 "zcopy": true, 00:19:43.419 "get_zone_info": false, 00:19:43.419 "zone_management": false, 00:19:43.419 "zone_append": false, 00:19:43.419 "compare": false, 00:19:43.419 "compare_and_write": false, 00:19:43.419 "abort": true, 00:19:43.419 "seek_hole": false, 00:19:43.419 "seek_data": false, 00:19:43.419 "copy": true, 00:19:43.419 "nvme_iov_md": false 00:19:43.419 }, 00:19:43.419 "memory_domains": [ 00:19:43.419 { 00:19:43.419 "dma_device_id": "system", 00:19:43.419 "dma_device_type": 1 00:19:43.419 }, 00:19:43.419 { 00:19:43.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.419 "dma_device_type": 2 00:19:43.419 } 00:19:43.419 ], 00:19:43.419 "driver_specific": {} 00:19:43.419 } 00:19:43.419 ] 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.419 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.419 "name": "Existed_Raid", 00:19:43.419 "uuid": 
"8606218c-a73a-4e70-975a-fbc72bc03cb3", 00:19:43.419 "strip_size_kb": 64, 00:19:43.419 "state": "configuring", 00:19:43.419 "raid_level": "raid5f", 00:19:43.419 "superblock": true, 00:19:43.419 "num_base_bdevs": 4, 00:19:43.419 "num_base_bdevs_discovered": 2, 00:19:43.420 "num_base_bdevs_operational": 4, 00:19:43.420 "base_bdevs_list": [ 00:19:43.420 { 00:19:43.420 "name": "BaseBdev1", 00:19:43.420 "uuid": "2897a98e-38c8-4154-b791-1c23e4f46d98", 00:19:43.420 "is_configured": true, 00:19:43.420 "data_offset": 2048, 00:19:43.420 "data_size": 63488 00:19:43.420 }, 00:19:43.420 { 00:19:43.420 "name": "BaseBdev2", 00:19:43.420 "uuid": "7b05d4c7-3131-41cf-a3d3-64c953cf3be5", 00:19:43.420 "is_configured": true, 00:19:43.420 "data_offset": 2048, 00:19:43.420 "data_size": 63488 00:19:43.420 }, 00:19:43.420 { 00:19:43.420 "name": "BaseBdev3", 00:19:43.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.420 "is_configured": false, 00:19:43.420 "data_offset": 0, 00:19:43.420 "data_size": 0 00:19:43.420 }, 00:19:43.420 { 00:19:43.420 "name": "BaseBdev4", 00:19:43.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.420 "is_configured": false, 00:19:43.420 "data_offset": 0, 00:19:43.420 "data_size": 0 00:19:43.420 } 00:19:43.420 ] 00:19:43.420 }' 00:19:43.420 18:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.420 18:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.986 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:43.986 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.987 [2024-12-06 18:18:09.286451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:43.987 BaseBdev3 
00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.987 [ 00:19:43.987 { 00:19:43.987 "name": "BaseBdev3", 00:19:43.987 "aliases": [ 00:19:43.987 "6baedb93-784c-4567-b996-488759b07b04" 00:19:43.987 ], 00:19:43.987 "product_name": "Malloc disk", 00:19:43.987 "block_size": 512, 00:19:43.987 "num_blocks": 65536, 00:19:43.987 "uuid": "6baedb93-784c-4567-b996-488759b07b04", 00:19:43.987 
"assigned_rate_limits": { 00:19:43.987 "rw_ios_per_sec": 0, 00:19:43.987 "rw_mbytes_per_sec": 0, 00:19:43.987 "r_mbytes_per_sec": 0, 00:19:43.987 "w_mbytes_per_sec": 0 00:19:43.987 }, 00:19:43.987 "claimed": true, 00:19:43.987 "claim_type": "exclusive_write", 00:19:43.987 "zoned": false, 00:19:43.987 "supported_io_types": { 00:19:43.987 "read": true, 00:19:43.987 "write": true, 00:19:43.987 "unmap": true, 00:19:43.987 "flush": true, 00:19:43.987 "reset": true, 00:19:43.987 "nvme_admin": false, 00:19:43.987 "nvme_io": false, 00:19:43.987 "nvme_io_md": false, 00:19:43.987 "write_zeroes": true, 00:19:43.987 "zcopy": true, 00:19:43.987 "get_zone_info": false, 00:19:43.987 "zone_management": false, 00:19:43.987 "zone_append": false, 00:19:43.987 "compare": false, 00:19:43.987 "compare_and_write": false, 00:19:43.987 "abort": true, 00:19:43.987 "seek_hole": false, 00:19:43.987 "seek_data": false, 00:19:43.987 "copy": true, 00:19:43.987 "nvme_iov_md": false 00:19:43.987 }, 00:19:43.987 "memory_domains": [ 00:19:43.987 { 00:19:43.987 "dma_device_id": "system", 00:19:43.987 "dma_device_type": 1 00:19:43.987 }, 00:19:43.987 { 00:19:43.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.987 "dma_device_type": 2 00:19:43.987 } 00:19:43.987 ], 00:19:43.987 "driver_specific": {} 00:19:43.987 } 00:19:43.987 ] 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.987 "name": "Existed_Raid", 00:19:43.987 "uuid": "8606218c-a73a-4e70-975a-fbc72bc03cb3", 00:19:43.987 "strip_size_kb": 64, 00:19:43.987 "state": "configuring", 00:19:43.987 "raid_level": "raid5f", 00:19:43.987 "superblock": true, 00:19:43.987 "num_base_bdevs": 4, 00:19:43.987 "num_base_bdevs_discovered": 3, 
00:19:43.987 "num_base_bdevs_operational": 4, 00:19:43.987 "base_bdevs_list": [ 00:19:43.987 { 00:19:43.987 "name": "BaseBdev1", 00:19:43.987 "uuid": "2897a98e-38c8-4154-b791-1c23e4f46d98", 00:19:43.987 "is_configured": true, 00:19:43.987 "data_offset": 2048, 00:19:43.987 "data_size": 63488 00:19:43.987 }, 00:19:43.987 { 00:19:43.987 "name": "BaseBdev2", 00:19:43.987 "uuid": "7b05d4c7-3131-41cf-a3d3-64c953cf3be5", 00:19:43.987 "is_configured": true, 00:19:43.987 "data_offset": 2048, 00:19:43.987 "data_size": 63488 00:19:43.987 }, 00:19:43.987 { 00:19:43.987 "name": "BaseBdev3", 00:19:43.987 "uuid": "6baedb93-784c-4567-b996-488759b07b04", 00:19:43.987 "is_configured": true, 00:19:43.987 "data_offset": 2048, 00:19:43.987 "data_size": 63488 00:19:43.987 }, 00:19:43.987 { 00:19:43.987 "name": "BaseBdev4", 00:19:43.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.987 "is_configured": false, 00:19:43.987 "data_offset": 0, 00:19:43.987 "data_size": 0 00:19:43.987 } 00:19:43.987 ] 00:19:43.987 }' 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.987 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.555 [2024-12-06 18:18:09.914226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:44.555 [2024-12-06 18:18:09.914612] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:44.555 [2024-12-06 18:18:09.914633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:44.555 [2024-12-06 
18:18:09.915039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:44.555 BaseBdev4 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.555 [2024-12-06 18:18:09.922129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:44.555 [2024-12-06 18:18:09.922299] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:44.555 [2024-12-06 18:18:09.922662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:44.555 18:18:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.555 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.555 [ 00:19:44.555 { 00:19:44.555 "name": "BaseBdev4", 00:19:44.555 "aliases": [ 00:19:44.555 "51bb9cca-8f52-4cff-9a32-e191589a9bce" 00:19:44.555 ], 00:19:44.555 "product_name": "Malloc disk", 00:19:44.555 "block_size": 512, 00:19:44.555 "num_blocks": 65536, 00:19:44.555 "uuid": "51bb9cca-8f52-4cff-9a32-e191589a9bce", 00:19:44.555 "assigned_rate_limits": { 00:19:44.555 "rw_ios_per_sec": 0, 00:19:44.555 "rw_mbytes_per_sec": 0, 00:19:44.555 "r_mbytes_per_sec": 0, 00:19:44.556 "w_mbytes_per_sec": 0 00:19:44.556 }, 00:19:44.556 "claimed": true, 00:19:44.556 "claim_type": "exclusive_write", 00:19:44.556 "zoned": false, 00:19:44.556 "supported_io_types": { 00:19:44.556 "read": true, 00:19:44.556 "write": true, 00:19:44.556 "unmap": true, 00:19:44.556 "flush": true, 00:19:44.556 "reset": true, 00:19:44.556 "nvme_admin": false, 00:19:44.556 "nvme_io": false, 00:19:44.556 "nvme_io_md": false, 00:19:44.556 "write_zeroes": true, 00:19:44.556 "zcopy": true, 00:19:44.556 "get_zone_info": false, 00:19:44.556 "zone_management": false, 00:19:44.556 "zone_append": false, 00:19:44.556 "compare": false, 00:19:44.556 "compare_and_write": false, 00:19:44.556 "abort": true, 00:19:44.556 "seek_hole": false, 00:19:44.556 "seek_data": false, 00:19:44.556 "copy": true, 00:19:44.556 "nvme_iov_md": false 00:19:44.556 }, 00:19:44.556 "memory_domains": [ 00:19:44.556 { 00:19:44.556 "dma_device_id": "system", 00:19:44.556 "dma_device_type": 1 00:19:44.556 }, 00:19:44.556 { 00:19:44.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.556 "dma_device_type": 2 00:19:44.556 } 00:19:44.556 ], 00:19:44.556 "driver_specific": {} 00:19:44.556 } 00:19:44.556 ] 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.556 18:18:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:44.556 18:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.556 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.556 "name": "Existed_Raid", 00:19:44.556 "uuid": "8606218c-a73a-4e70-975a-fbc72bc03cb3", 00:19:44.556 "strip_size_kb": 64, 00:19:44.556 "state": "online", 00:19:44.556 "raid_level": "raid5f", 00:19:44.556 "superblock": true, 00:19:44.556 "num_base_bdevs": 4, 00:19:44.556 "num_base_bdevs_discovered": 4, 00:19:44.556 "num_base_bdevs_operational": 4, 00:19:44.556 "base_bdevs_list": [ 00:19:44.556 { 00:19:44.556 "name": "BaseBdev1", 00:19:44.556 "uuid": "2897a98e-38c8-4154-b791-1c23e4f46d98", 00:19:44.556 "is_configured": true, 00:19:44.556 "data_offset": 2048, 00:19:44.556 "data_size": 63488 00:19:44.556 }, 00:19:44.556 { 00:19:44.556 "name": "BaseBdev2", 00:19:44.556 "uuid": "7b05d4c7-3131-41cf-a3d3-64c953cf3be5", 00:19:44.556 "is_configured": true, 00:19:44.556 "data_offset": 2048, 00:19:44.556 "data_size": 63488 00:19:44.556 }, 00:19:44.556 { 00:19:44.556 "name": "BaseBdev3", 00:19:44.556 "uuid": "6baedb93-784c-4567-b996-488759b07b04", 00:19:44.556 "is_configured": true, 00:19:44.556 "data_offset": 2048, 00:19:44.556 "data_size": 63488 00:19:44.556 }, 00:19:44.556 { 00:19:44.556 "name": "BaseBdev4", 00:19:44.556 "uuid": "51bb9cca-8f52-4cff-9a32-e191589a9bce", 00:19:44.556 "is_configured": true, 00:19:44.556 "data_offset": 2048, 00:19:44.556 "data_size": 63488 00:19:44.556 } 00:19:44.556 ] 00:19:44.556 }' 00:19:44.556 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.556 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.122 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:45.122 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:19:45.122 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:45.122 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:45.122 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:45.122 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:45.122 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:45.122 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:45.122 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.122 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.122 [2024-12-06 18:18:10.478417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:45.122 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.122 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:45.122 "name": "Existed_Raid", 00:19:45.122 "aliases": [ 00:19:45.122 "8606218c-a73a-4e70-975a-fbc72bc03cb3" 00:19:45.122 ], 00:19:45.122 "product_name": "Raid Volume", 00:19:45.122 "block_size": 512, 00:19:45.123 "num_blocks": 190464, 00:19:45.123 "uuid": "8606218c-a73a-4e70-975a-fbc72bc03cb3", 00:19:45.123 "assigned_rate_limits": { 00:19:45.123 "rw_ios_per_sec": 0, 00:19:45.123 "rw_mbytes_per_sec": 0, 00:19:45.123 "r_mbytes_per_sec": 0, 00:19:45.123 "w_mbytes_per_sec": 0 00:19:45.123 }, 00:19:45.123 "claimed": false, 00:19:45.123 "zoned": false, 00:19:45.123 "supported_io_types": { 00:19:45.123 "read": true, 00:19:45.123 "write": true, 00:19:45.123 "unmap": false, 00:19:45.123 "flush": false, 
00:19:45.123 "reset": true, 00:19:45.123 "nvme_admin": false, 00:19:45.123 "nvme_io": false, 00:19:45.123 "nvme_io_md": false, 00:19:45.123 "write_zeroes": true, 00:19:45.123 "zcopy": false, 00:19:45.123 "get_zone_info": false, 00:19:45.123 "zone_management": false, 00:19:45.123 "zone_append": false, 00:19:45.123 "compare": false, 00:19:45.123 "compare_and_write": false, 00:19:45.123 "abort": false, 00:19:45.123 "seek_hole": false, 00:19:45.123 "seek_data": false, 00:19:45.123 "copy": false, 00:19:45.123 "nvme_iov_md": false 00:19:45.123 }, 00:19:45.123 "driver_specific": { 00:19:45.123 "raid": { 00:19:45.123 "uuid": "8606218c-a73a-4e70-975a-fbc72bc03cb3", 00:19:45.123 "strip_size_kb": 64, 00:19:45.123 "state": "online", 00:19:45.123 "raid_level": "raid5f", 00:19:45.123 "superblock": true, 00:19:45.123 "num_base_bdevs": 4, 00:19:45.123 "num_base_bdevs_discovered": 4, 00:19:45.123 "num_base_bdevs_operational": 4, 00:19:45.123 "base_bdevs_list": [ 00:19:45.123 { 00:19:45.123 "name": "BaseBdev1", 00:19:45.123 "uuid": "2897a98e-38c8-4154-b791-1c23e4f46d98", 00:19:45.123 "is_configured": true, 00:19:45.123 "data_offset": 2048, 00:19:45.123 "data_size": 63488 00:19:45.123 }, 00:19:45.123 { 00:19:45.123 "name": "BaseBdev2", 00:19:45.123 "uuid": "7b05d4c7-3131-41cf-a3d3-64c953cf3be5", 00:19:45.123 "is_configured": true, 00:19:45.123 "data_offset": 2048, 00:19:45.123 "data_size": 63488 00:19:45.123 }, 00:19:45.123 { 00:19:45.123 "name": "BaseBdev3", 00:19:45.123 "uuid": "6baedb93-784c-4567-b996-488759b07b04", 00:19:45.123 "is_configured": true, 00:19:45.123 "data_offset": 2048, 00:19:45.123 "data_size": 63488 00:19:45.123 }, 00:19:45.123 { 00:19:45.123 "name": "BaseBdev4", 00:19:45.123 "uuid": "51bb9cca-8f52-4cff-9a32-e191589a9bce", 00:19:45.123 "is_configured": true, 00:19:45.123 "data_offset": 2048, 00:19:45.123 "data_size": 63488 00:19:45.123 } 00:19:45.123 ] 00:19:45.123 } 00:19:45.123 } 00:19:45.123 }' 00:19:45.123 18:18:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:45.123 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:45.123 BaseBdev2 00:19:45.123 BaseBdev3 00:19:45.123 BaseBdev4' 00:19:45.123 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.123 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:45.123 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.123 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:45.123 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.123 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.123 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:45.381 18:18:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.381 18:18:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.381 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.381 [2024-12-06 18:18:10.826294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.639 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.640 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.640 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.640 "name": "Existed_Raid", 00:19:45.640 "uuid": "8606218c-a73a-4e70-975a-fbc72bc03cb3", 00:19:45.640 "strip_size_kb": 64, 00:19:45.640 "state": "online", 00:19:45.640 "raid_level": "raid5f", 00:19:45.640 "superblock": true, 00:19:45.640 "num_base_bdevs": 4, 00:19:45.640 "num_base_bdevs_discovered": 3, 00:19:45.640 "num_base_bdevs_operational": 3, 00:19:45.640 "base_bdevs_list": [ 00:19:45.640 { 00:19:45.640 "name": 
null, 00:19:45.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.640 "is_configured": false, 00:19:45.640 "data_offset": 0, 00:19:45.640 "data_size": 63488 00:19:45.640 }, 00:19:45.640 { 00:19:45.640 "name": "BaseBdev2", 00:19:45.640 "uuid": "7b05d4c7-3131-41cf-a3d3-64c953cf3be5", 00:19:45.640 "is_configured": true, 00:19:45.640 "data_offset": 2048, 00:19:45.640 "data_size": 63488 00:19:45.640 }, 00:19:45.640 { 00:19:45.640 "name": "BaseBdev3", 00:19:45.640 "uuid": "6baedb93-784c-4567-b996-488759b07b04", 00:19:45.640 "is_configured": true, 00:19:45.640 "data_offset": 2048, 00:19:45.640 "data_size": 63488 00:19:45.640 }, 00:19:45.640 { 00:19:45.640 "name": "BaseBdev4", 00:19:45.640 "uuid": "51bb9cca-8f52-4cff-9a32-e191589a9bce", 00:19:45.640 "is_configured": true, 00:19:45.640 "data_offset": 2048, 00:19:45.640 "data_size": 63488 00:19:45.640 } 00:19:45.640 ] 00:19:45.640 }' 00:19:45.640 18:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.640 18:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.898 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:45.898 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.156 [2024-12-06 18:18:11.473599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:46.156 [2024-12-06 18:18:11.473966] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:46.156 [2024-12-06 18:18:11.559799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:46.156 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:19:46.157 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:46.157 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.157 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.157 [2024-12-06 18:18:11.615857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.415 [2024-12-06 
18:18:11.761424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:46.415 [2024-12-06 18:18:11.761642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:46.415 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.415 18:18:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.675 BaseBdev2 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.675 [ 00:19:46.675 { 00:19:46.675 "name": "BaseBdev2", 00:19:46.675 "aliases": [ 00:19:46.675 "97ced43d-86b4-4dc9-82d3-14130b8cdbb0" 00:19:46.675 ], 00:19:46.675 "product_name": "Malloc disk", 00:19:46.675 "block_size": 512, 00:19:46.675 
"num_blocks": 65536, 00:19:46.675 "uuid": "97ced43d-86b4-4dc9-82d3-14130b8cdbb0", 00:19:46.675 "assigned_rate_limits": { 00:19:46.675 "rw_ios_per_sec": 0, 00:19:46.675 "rw_mbytes_per_sec": 0, 00:19:46.675 "r_mbytes_per_sec": 0, 00:19:46.675 "w_mbytes_per_sec": 0 00:19:46.675 }, 00:19:46.675 "claimed": false, 00:19:46.675 "zoned": false, 00:19:46.675 "supported_io_types": { 00:19:46.675 "read": true, 00:19:46.675 "write": true, 00:19:46.675 "unmap": true, 00:19:46.675 "flush": true, 00:19:46.675 "reset": true, 00:19:46.675 "nvme_admin": false, 00:19:46.675 "nvme_io": false, 00:19:46.675 "nvme_io_md": false, 00:19:46.675 "write_zeroes": true, 00:19:46.675 "zcopy": true, 00:19:46.675 "get_zone_info": false, 00:19:46.675 "zone_management": false, 00:19:46.675 "zone_append": false, 00:19:46.675 "compare": false, 00:19:46.675 "compare_and_write": false, 00:19:46.675 "abort": true, 00:19:46.675 "seek_hole": false, 00:19:46.675 "seek_data": false, 00:19:46.675 "copy": true, 00:19:46.675 "nvme_iov_md": false 00:19:46.675 }, 00:19:46.675 "memory_domains": [ 00:19:46.675 { 00:19:46.675 "dma_device_id": "system", 00:19:46.675 "dma_device_type": 1 00:19:46.675 }, 00:19:46.675 { 00:19:46.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.675 "dma_device_type": 2 00:19:46.675 } 00:19:46.675 ], 00:19:46.675 "driver_specific": {} 00:19:46.675 } 00:19:46.675 ] 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:46.675 18:18:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.675 18:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.675 BaseBdev3 00:19:46.675 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.675 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:46.675 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:46.675 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:46.675 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:46.675 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:46.675 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:46.675 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:46.675 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.675 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.675 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.675 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:46.675 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.675 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.675 [ 00:19:46.675 { 00:19:46.675 "name": "BaseBdev3", 00:19:46.675 "aliases": [ 00:19:46.675 
"b77f4898-1588-4204-bec5-7943e8c4d583" 00:19:46.675 ], 00:19:46.675 "product_name": "Malloc disk", 00:19:46.675 "block_size": 512, 00:19:46.675 "num_blocks": 65536, 00:19:46.675 "uuid": "b77f4898-1588-4204-bec5-7943e8c4d583", 00:19:46.675 "assigned_rate_limits": { 00:19:46.675 "rw_ios_per_sec": 0, 00:19:46.675 "rw_mbytes_per_sec": 0, 00:19:46.675 "r_mbytes_per_sec": 0, 00:19:46.675 "w_mbytes_per_sec": 0 00:19:46.675 }, 00:19:46.675 "claimed": false, 00:19:46.675 "zoned": false, 00:19:46.675 "supported_io_types": { 00:19:46.675 "read": true, 00:19:46.675 "write": true, 00:19:46.675 "unmap": true, 00:19:46.675 "flush": true, 00:19:46.675 "reset": true, 00:19:46.675 "nvme_admin": false, 00:19:46.675 "nvme_io": false, 00:19:46.675 "nvme_io_md": false, 00:19:46.676 "write_zeroes": true, 00:19:46.676 "zcopy": true, 00:19:46.676 "get_zone_info": false, 00:19:46.676 "zone_management": false, 00:19:46.676 "zone_append": false, 00:19:46.676 "compare": false, 00:19:46.676 "compare_and_write": false, 00:19:46.676 "abort": true, 00:19:46.676 "seek_hole": false, 00:19:46.676 "seek_data": false, 00:19:46.676 "copy": true, 00:19:46.676 "nvme_iov_md": false 00:19:46.676 }, 00:19:46.676 "memory_domains": [ 00:19:46.676 { 00:19:46.676 "dma_device_id": "system", 00:19:46.676 "dma_device_type": 1 00:19:46.676 }, 00:19:46.676 { 00:19:46.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.676 "dma_device_type": 2 00:19:46.676 } 00:19:46.676 ], 00:19:46.676 "driver_specific": {} 00:19:46.676 } 00:19:46.676 ] 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:46.676 18:18:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.676 BaseBdev4 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:46.676 [ 00:19:46.676 { 00:19:46.676 "name": "BaseBdev4", 00:19:46.676 "aliases": [ 00:19:46.676 "a23452bd-3a7f-46f6-b9f0-26a589eb2ee5" 00:19:46.676 ], 00:19:46.676 "product_name": "Malloc disk", 00:19:46.676 "block_size": 512, 00:19:46.676 "num_blocks": 65536, 00:19:46.676 "uuid": "a23452bd-3a7f-46f6-b9f0-26a589eb2ee5", 00:19:46.676 "assigned_rate_limits": { 00:19:46.676 "rw_ios_per_sec": 0, 00:19:46.676 "rw_mbytes_per_sec": 0, 00:19:46.676 "r_mbytes_per_sec": 0, 00:19:46.676 "w_mbytes_per_sec": 0 00:19:46.676 }, 00:19:46.676 "claimed": false, 00:19:46.676 "zoned": false, 00:19:46.676 "supported_io_types": { 00:19:46.676 "read": true, 00:19:46.676 "write": true, 00:19:46.676 "unmap": true, 00:19:46.676 "flush": true, 00:19:46.676 "reset": true, 00:19:46.676 "nvme_admin": false, 00:19:46.676 "nvme_io": false, 00:19:46.676 "nvme_io_md": false, 00:19:46.676 "write_zeroes": true, 00:19:46.676 "zcopy": true, 00:19:46.676 "get_zone_info": false, 00:19:46.676 "zone_management": false, 00:19:46.676 "zone_append": false, 00:19:46.676 "compare": false, 00:19:46.676 "compare_and_write": false, 00:19:46.676 "abort": true, 00:19:46.676 "seek_hole": false, 00:19:46.676 "seek_data": false, 00:19:46.676 "copy": true, 00:19:46.676 "nvme_iov_md": false 00:19:46.676 }, 00:19:46.676 "memory_domains": [ 00:19:46.676 { 00:19:46.676 "dma_device_id": "system", 00:19:46.676 "dma_device_type": 1 00:19:46.676 }, 00:19:46.676 { 00:19:46.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.676 "dma_device_type": 2 00:19:46.676 } 00:19:46.676 ], 00:19:46.676 "driver_specific": {} 00:19:46.676 } 00:19:46.676 ] 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:46.676 18:18:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.676 [2024-12-06 18:18:12.139983] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:46.676 [2024-12-06 18:18:12.140040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:46.676 [2024-12-06 18:18:12.140074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.676 [2024-12-06 18:18:12.142636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:46.676 [2024-12-06 18:18:12.142709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.676 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.935 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.935 "name": "Existed_Raid", 00:19:46.935 "uuid": "f9cc9a93-5e41-41ad-86c5-83c6bbd73817", 00:19:46.935 "strip_size_kb": 64, 00:19:46.935 "state": "configuring", 00:19:46.935 "raid_level": "raid5f", 00:19:46.935 "superblock": true, 00:19:46.935 "num_base_bdevs": 4, 00:19:46.935 "num_base_bdevs_discovered": 3, 00:19:46.935 "num_base_bdevs_operational": 4, 00:19:46.935 "base_bdevs_list": [ 00:19:46.935 { 00:19:46.935 "name": "BaseBdev1", 00:19:46.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.936 "is_configured": false, 00:19:46.936 "data_offset": 0, 00:19:46.936 "data_size": 0 00:19:46.936 }, 00:19:46.936 { 00:19:46.936 "name": "BaseBdev2", 00:19:46.936 "uuid": "97ced43d-86b4-4dc9-82d3-14130b8cdbb0", 00:19:46.936 "is_configured": true, 00:19:46.936 "data_offset": 2048, 00:19:46.936 
"data_size": 63488 00:19:46.936 }, 00:19:46.936 { 00:19:46.936 "name": "BaseBdev3", 00:19:46.936 "uuid": "b77f4898-1588-4204-bec5-7943e8c4d583", 00:19:46.936 "is_configured": true, 00:19:46.936 "data_offset": 2048, 00:19:46.936 "data_size": 63488 00:19:46.936 }, 00:19:46.936 { 00:19:46.936 "name": "BaseBdev4", 00:19:46.936 "uuid": "a23452bd-3a7f-46f6-b9f0-26a589eb2ee5", 00:19:46.936 "is_configured": true, 00:19:46.936 "data_offset": 2048, 00:19:46.936 "data_size": 63488 00:19:46.936 } 00:19:46.936 ] 00:19:46.936 }' 00:19:46.936 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.936 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.195 [2024-12-06 18:18:12.660128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:47.195 18:18:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.195 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.454 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.454 "name": "Existed_Raid", 00:19:47.454 "uuid": "f9cc9a93-5e41-41ad-86c5-83c6bbd73817", 00:19:47.454 "strip_size_kb": 64, 00:19:47.454 "state": "configuring", 00:19:47.454 "raid_level": "raid5f", 00:19:47.454 "superblock": true, 00:19:47.454 "num_base_bdevs": 4, 00:19:47.454 "num_base_bdevs_discovered": 2, 00:19:47.454 "num_base_bdevs_operational": 4, 00:19:47.454 "base_bdevs_list": [ 00:19:47.454 { 00:19:47.454 "name": "BaseBdev1", 00:19:47.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.454 "is_configured": false, 00:19:47.454 "data_offset": 0, 00:19:47.454 "data_size": 0 00:19:47.454 }, 00:19:47.454 { 00:19:47.454 "name": null, 00:19:47.454 "uuid": "97ced43d-86b4-4dc9-82d3-14130b8cdbb0", 00:19:47.454 
"is_configured": false, 00:19:47.454 "data_offset": 0, 00:19:47.454 "data_size": 63488 00:19:47.454 }, 00:19:47.454 { 00:19:47.454 "name": "BaseBdev3", 00:19:47.454 "uuid": "b77f4898-1588-4204-bec5-7943e8c4d583", 00:19:47.454 "is_configured": true, 00:19:47.454 "data_offset": 2048, 00:19:47.454 "data_size": 63488 00:19:47.454 }, 00:19:47.454 { 00:19:47.454 "name": "BaseBdev4", 00:19:47.454 "uuid": "a23452bd-3a7f-46f6-b9f0-26a589eb2ee5", 00:19:47.454 "is_configured": true, 00:19:47.454 "data_offset": 2048, 00:19:47.454 "data_size": 63488 00:19:47.454 } 00:19:47.454 ] 00:19:47.454 }' 00:19:47.454 18:18:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.454 18:18:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.714 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.714 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.714 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.714 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:47.714 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.973 [2024-12-06 18:18:13.286879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:19:47.973 BaseBdev1 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.973 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.973 [ 00:19:47.973 { 00:19:47.973 "name": "BaseBdev1", 00:19:47.973 "aliases": [ 00:19:47.973 "67a115c3-95ba-45f4-bde6-b023ba129de9" 00:19:47.973 ], 00:19:47.973 "product_name": "Malloc disk", 00:19:47.973 "block_size": 512, 00:19:47.974 "num_blocks": 65536, 00:19:47.974 "uuid": "67a115c3-95ba-45f4-bde6-b023ba129de9", 
00:19:47.974 "assigned_rate_limits": { 00:19:47.974 "rw_ios_per_sec": 0, 00:19:47.974 "rw_mbytes_per_sec": 0, 00:19:47.974 "r_mbytes_per_sec": 0, 00:19:47.974 "w_mbytes_per_sec": 0 00:19:47.974 }, 00:19:47.974 "claimed": true, 00:19:47.974 "claim_type": "exclusive_write", 00:19:47.974 "zoned": false, 00:19:47.974 "supported_io_types": { 00:19:47.974 "read": true, 00:19:47.974 "write": true, 00:19:47.974 "unmap": true, 00:19:47.974 "flush": true, 00:19:47.974 "reset": true, 00:19:47.974 "nvme_admin": false, 00:19:47.974 "nvme_io": false, 00:19:47.974 "nvme_io_md": false, 00:19:47.974 "write_zeroes": true, 00:19:47.974 "zcopy": true, 00:19:47.974 "get_zone_info": false, 00:19:47.974 "zone_management": false, 00:19:47.974 "zone_append": false, 00:19:47.974 "compare": false, 00:19:47.974 "compare_and_write": false, 00:19:47.974 "abort": true, 00:19:47.974 "seek_hole": false, 00:19:47.974 "seek_data": false, 00:19:47.974 "copy": true, 00:19:47.974 "nvme_iov_md": false 00:19:47.974 }, 00:19:47.974 "memory_domains": [ 00:19:47.974 { 00:19:47.974 "dma_device_id": "system", 00:19:47.974 "dma_device_type": 1 00:19:47.974 }, 00:19:47.974 { 00:19:47.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.974 "dma_device_type": 2 00:19:47.974 } 00:19:47.974 ], 00:19:47.974 "driver_specific": {} 00:19:47.974 } 00:19:47.974 ] 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:47.974 18:18:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.974 "name": "Existed_Raid", 00:19:47.974 "uuid": "f9cc9a93-5e41-41ad-86c5-83c6bbd73817", 00:19:47.974 "strip_size_kb": 64, 00:19:47.974 "state": "configuring", 00:19:47.974 "raid_level": "raid5f", 00:19:47.974 "superblock": true, 00:19:47.974 "num_base_bdevs": 4, 00:19:47.974 "num_base_bdevs_discovered": 3, 00:19:47.974 "num_base_bdevs_operational": 4, 00:19:47.974 "base_bdevs_list": [ 00:19:47.974 { 00:19:47.974 "name": "BaseBdev1", 00:19:47.974 "uuid": "67a115c3-95ba-45f4-bde6-b023ba129de9", 
00:19:47.974 "is_configured": true, 00:19:47.974 "data_offset": 2048, 00:19:47.974 "data_size": 63488 00:19:47.974 }, 00:19:47.974 { 00:19:47.974 "name": null, 00:19:47.974 "uuid": "97ced43d-86b4-4dc9-82d3-14130b8cdbb0", 00:19:47.974 "is_configured": false, 00:19:47.974 "data_offset": 0, 00:19:47.974 "data_size": 63488 00:19:47.974 }, 00:19:47.974 { 00:19:47.974 "name": "BaseBdev3", 00:19:47.974 "uuid": "b77f4898-1588-4204-bec5-7943e8c4d583", 00:19:47.974 "is_configured": true, 00:19:47.974 "data_offset": 2048, 00:19:47.974 "data_size": 63488 00:19:47.974 }, 00:19:47.974 { 00:19:47.974 "name": "BaseBdev4", 00:19:47.974 "uuid": "a23452bd-3a7f-46f6-b9f0-26a589eb2ee5", 00:19:47.974 "is_configured": true, 00:19:47.974 "data_offset": 2048, 00:19:47.974 "data_size": 63488 00:19:47.974 } 00:19:47.974 ] 00:19:47.974 }' 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.974 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.542 [2024-12-06 18:18:13.895169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.542 "name": "Existed_Raid", 00:19:48.542 "uuid": "f9cc9a93-5e41-41ad-86c5-83c6bbd73817", 00:19:48.542 "strip_size_kb": 64, 00:19:48.542 "state": "configuring", 00:19:48.542 "raid_level": "raid5f", 00:19:48.542 "superblock": true, 00:19:48.542 "num_base_bdevs": 4, 00:19:48.542 "num_base_bdevs_discovered": 2, 00:19:48.542 "num_base_bdevs_operational": 4, 00:19:48.542 "base_bdevs_list": [ 00:19:48.542 { 00:19:48.542 "name": "BaseBdev1", 00:19:48.542 "uuid": "67a115c3-95ba-45f4-bde6-b023ba129de9", 00:19:48.542 "is_configured": true, 00:19:48.542 "data_offset": 2048, 00:19:48.542 "data_size": 63488 00:19:48.542 }, 00:19:48.542 { 00:19:48.542 "name": null, 00:19:48.542 "uuid": "97ced43d-86b4-4dc9-82d3-14130b8cdbb0", 00:19:48.542 "is_configured": false, 00:19:48.542 "data_offset": 0, 00:19:48.542 "data_size": 63488 00:19:48.542 }, 00:19:48.542 { 00:19:48.542 "name": null, 00:19:48.542 "uuid": "b77f4898-1588-4204-bec5-7943e8c4d583", 00:19:48.542 "is_configured": false, 00:19:48.542 "data_offset": 0, 00:19:48.542 "data_size": 63488 00:19:48.542 }, 00:19:48.542 { 00:19:48.542 "name": "BaseBdev4", 00:19:48.542 "uuid": "a23452bd-3a7f-46f6-b9f0-26a589eb2ee5", 00:19:48.542 "is_configured": true, 00:19:48.542 "data_offset": 2048, 00:19:48.542 "data_size": 63488 00:19:48.542 } 00:19:48.542 ] 00:19:48.542 }' 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.542 18:18:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.109 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.109 18:18:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:49.109 18:18:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.109 18:18:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.109 18:18:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.109 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:49.109 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.110 [2024-12-06 18:18:14.479313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.110 "name": "Existed_Raid", 00:19:49.110 "uuid": "f9cc9a93-5e41-41ad-86c5-83c6bbd73817", 00:19:49.110 "strip_size_kb": 64, 00:19:49.110 "state": "configuring", 00:19:49.110 "raid_level": "raid5f", 00:19:49.110 "superblock": true, 00:19:49.110 "num_base_bdevs": 4, 00:19:49.110 "num_base_bdevs_discovered": 3, 00:19:49.110 "num_base_bdevs_operational": 4, 00:19:49.110 "base_bdevs_list": [ 00:19:49.110 { 00:19:49.110 "name": "BaseBdev1", 00:19:49.110 "uuid": "67a115c3-95ba-45f4-bde6-b023ba129de9", 00:19:49.110 "is_configured": true, 00:19:49.110 "data_offset": 2048, 00:19:49.110 "data_size": 63488 00:19:49.110 }, 00:19:49.110 { 00:19:49.110 "name": null, 00:19:49.110 "uuid": "97ced43d-86b4-4dc9-82d3-14130b8cdbb0", 00:19:49.110 "is_configured": false, 00:19:49.110 "data_offset": 0, 00:19:49.110 "data_size": 63488 00:19:49.110 }, 00:19:49.110 { 00:19:49.110 "name": "BaseBdev3", 00:19:49.110 "uuid": "b77f4898-1588-4204-bec5-7943e8c4d583", 
00:19:49.110 "is_configured": true, 00:19:49.110 "data_offset": 2048, 00:19:49.110 "data_size": 63488 00:19:49.110 }, 00:19:49.110 { 00:19:49.110 "name": "BaseBdev4", 00:19:49.110 "uuid": "a23452bd-3a7f-46f6-b9f0-26a589eb2ee5", 00:19:49.110 "is_configured": true, 00:19:49.110 "data_offset": 2048, 00:19:49.110 "data_size": 63488 00:19:49.110 } 00:19:49.110 ] 00:19:49.110 }' 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.110 18:18:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.677 [2024-12-06 18:18:15.091543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.677 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.937 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.937 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.938 "name": "Existed_Raid", 00:19:49.938 "uuid": "f9cc9a93-5e41-41ad-86c5-83c6bbd73817", 00:19:49.938 "strip_size_kb": 64, 00:19:49.938 "state": "configuring", 00:19:49.938 "raid_level": "raid5f", 
00:19:49.938 "superblock": true, 00:19:49.938 "num_base_bdevs": 4, 00:19:49.938 "num_base_bdevs_discovered": 2, 00:19:49.938 "num_base_bdevs_operational": 4, 00:19:49.938 "base_bdevs_list": [ 00:19:49.938 { 00:19:49.938 "name": null, 00:19:49.938 "uuid": "67a115c3-95ba-45f4-bde6-b023ba129de9", 00:19:49.938 "is_configured": false, 00:19:49.938 "data_offset": 0, 00:19:49.938 "data_size": 63488 00:19:49.938 }, 00:19:49.938 { 00:19:49.938 "name": null, 00:19:49.938 "uuid": "97ced43d-86b4-4dc9-82d3-14130b8cdbb0", 00:19:49.938 "is_configured": false, 00:19:49.938 "data_offset": 0, 00:19:49.938 "data_size": 63488 00:19:49.938 }, 00:19:49.938 { 00:19:49.938 "name": "BaseBdev3", 00:19:49.938 "uuid": "b77f4898-1588-4204-bec5-7943e8c4d583", 00:19:49.938 "is_configured": true, 00:19:49.938 "data_offset": 2048, 00:19:49.938 "data_size": 63488 00:19:49.938 }, 00:19:49.938 { 00:19:49.938 "name": "BaseBdev4", 00:19:49.938 "uuid": "a23452bd-3a7f-46f6-b9f0-26a589eb2ee5", 00:19:49.938 "is_configured": true, 00:19:49.938 "data_offset": 2048, 00:19:49.938 "data_size": 63488 00:19:49.938 } 00:19:49.938 ] 00:19:49.938 }' 00:19:49.938 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.938 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.504 [2024-12-06 18:18:15.774638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.504 "name": "Existed_Raid", 00:19:50.504 "uuid": "f9cc9a93-5e41-41ad-86c5-83c6bbd73817", 00:19:50.504 "strip_size_kb": 64, 00:19:50.504 "state": "configuring", 00:19:50.504 "raid_level": "raid5f", 00:19:50.504 "superblock": true, 00:19:50.504 "num_base_bdevs": 4, 00:19:50.504 "num_base_bdevs_discovered": 3, 00:19:50.504 "num_base_bdevs_operational": 4, 00:19:50.504 "base_bdevs_list": [ 00:19:50.504 { 00:19:50.504 "name": null, 00:19:50.504 "uuid": "67a115c3-95ba-45f4-bde6-b023ba129de9", 00:19:50.504 "is_configured": false, 00:19:50.504 "data_offset": 0, 00:19:50.504 "data_size": 63488 00:19:50.504 }, 00:19:50.504 { 00:19:50.504 "name": "BaseBdev2", 00:19:50.504 "uuid": "97ced43d-86b4-4dc9-82d3-14130b8cdbb0", 00:19:50.504 "is_configured": true, 00:19:50.504 "data_offset": 2048, 00:19:50.504 "data_size": 63488 00:19:50.504 }, 00:19:50.504 { 00:19:50.504 "name": "BaseBdev3", 00:19:50.504 "uuid": "b77f4898-1588-4204-bec5-7943e8c4d583", 00:19:50.504 "is_configured": true, 00:19:50.504 "data_offset": 2048, 00:19:50.504 "data_size": 63488 00:19:50.504 }, 00:19:50.504 { 00:19:50.504 "name": "BaseBdev4", 00:19:50.504 "uuid": "a23452bd-3a7f-46f6-b9f0-26a589eb2ee5", 00:19:50.504 "is_configured": true, 00:19:50.504 "data_offset": 2048, 00:19:50.504 "data_size": 63488 00:19:50.504 } 00:19:50.504 ] 00:19:50.504 }' 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:19:50.504 18:18:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 67a115c3-95ba-45f4-bde6-b023ba129de9 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.103 [2024-12-06 18:18:16.450423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:51.103 [2024-12-06 18:18:16.450805] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:51.103 [2024-12-06 18:18:16.450825] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:51.103 NewBaseBdev 00:19:51.103 [2024-12-06 18:18:16.451158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.103 [2024-12-06 18:18:16.457777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:51.103 [2024-12-06 18:18:16.457810] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:51.103 [2024-12-06 18:18:16.458131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.103 [ 00:19:51.103 { 00:19:51.103 "name": "NewBaseBdev", 00:19:51.103 "aliases": [ 00:19:51.103 "67a115c3-95ba-45f4-bde6-b023ba129de9" 00:19:51.103 ], 00:19:51.103 "product_name": "Malloc disk", 00:19:51.103 "block_size": 512, 00:19:51.103 "num_blocks": 65536, 00:19:51.103 "uuid": "67a115c3-95ba-45f4-bde6-b023ba129de9", 00:19:51.103 "assigned_rate_limits": { 00:19:51.103 "rw_ios_per_sec": 0, 00:19:51.103 "rw_mbytes_per_sec": 0, 00:19:51.103 "r_mbytes_per_sec": 0, 00:19:51.103 "w_mbytes_per_sec": 0 00:19:51.103 }, 00:19:51.103 "claimed": true, 00:19:51.103 "claim_type": "exclusive_write", 00:19:51.103 "zoned": false, 00:19:51.103 "supported_io_types": { 00:19:51.103 "read": true, 00:19:51.103 "write": true, 00:19:51.103 "unmap": true, 00:19:51.103 "flush": true, 00:19:51.103 "reset": true, 00:19:51.103 "nvme_admin": false, 00:19:51.103 "nvme_io": false, 00:19:51.103 "nvme_io_md": false, 00:19:51.103 "write_zeroes": true, 00:19:51.103 "zcopy": true, 00:19:51.103 "get_zone_info": false, 00:19:51.103 "zone_management": false, 00:19:51.103 "zone_append": false, 00:19:51.103 "compare": false, 00:19:51.103 "compare_and_write": false, 00:19:51.103 "abort": true, 00:19:51.103 "seek_hole": false, 00:19:51.103 "seek_data": false, 00:19:51.103 "copy": true, 00:19:51.103 "nvme_iov_md": false 00:19:51.103 }, 00:19:51.103 "memory_domains": [ 00:19:51.103 { 00:19:51.103 "dma_device_id": "system", 00:19:51.103 "dma_device_type": 1 00:19:51.103 }, 00:19:51.103 { 00:19:51.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.103 "dma_device_type": 2 00:19:51.103 } 
00:19:51.103 ], 00:19:51.103 "driver_specific": {} 00:19:51.103 } 00:19:51.103 ] 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.103 
18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.103 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.103 "name": "Existed_Raid", 00:19:51.103 "uuid": "f9cc9a93-5e41-41ad-86c5-83c6bbd73817", 00:19:51.103 "strip_size_kb": 64, 00:19:51.103 "state": "online", 00:19:51.103 "raid_level": "raid5f", 00:19:51.104 "superblock": true, 00:19:51.104 "num_base_bdevs": 4, 00:19:51.104 "num_base_bdevs_discovered": 4, 00:19:51.104 "num_base_bdevs_operational": 4, 00:19:51.104 "base_bdevs_list": [ 00:19:51.104 { 00:19:51.104 "name": "NewBaseBdev", 00:19:51.104 "uuid": "67a115c3-95ba-45f4-bde6-b023ba129de9", 00:19:51.104 "is_configured": true, 00:19:51.104 "data_offset": 2048, 00:19:51.104 "data_size": 63488 00:19:51.104 }, 00:19:51.104 { 00:19:51.104 "name": "BaseBdev2", 00:19:51.104 "uuid": "97ced43d-86b4-4dc9-82d3-14130b8cdbb0", 00:19:51.104 "is_configured": true, 00:19:51.104 "data_offset": 2048, 00:19:51.104 "data_size": 63488 00:19:51.104 }, 00:19:51.104 { 00:19:51.104 "name": "BaseBdev3", 00:19:51.104 "uuid": "b77f4898-1588-4204-bec5-7943e8c4d583", 00:19:51.104 "is_configured": true, 00:19:51.104 "data_offset": 2048, 00:19:51.104 "data_size": 63488 00:19:51.104 }, 00:19:51.104 { 00:19:51.104 "name": "BaseBdev4", 00:19:51.104 "uuid": "a23452bd-3a7f-46f6-b9f0-26a589eb2ee5", 00:19:51.104 "is_configured": true, 00:19:51.104 "data_offset": 2048, 00:19:51.104 "data_size": 63488 00:19:51.104 } 00:19:51.104 ] 00:19:51.104 }' 00:19:51.104 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.104 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.724 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:51.724 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:19:51.724 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:51.724 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:51.724 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:51.724 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:51.724 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:51.724 18:18:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:51.724 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.724 18:18:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.724 [2024-12-06 18:18:16.998011] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:51.724 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.724 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:51.724 "name": "Existed_Raid", 00:19:51.724 "aliases": [ 00:19:51.724 "f9cc9a93-5e41-41ad-86c5-83c6bbd73817" 00:19:51.724 ], 00:19:51.724 "product_name": "Raid Volume", 00:19:51.724 "block_size": 512, 00:19:51.724 "num_blocks": 190464, 00:19:51.724 "uuid": "f9cc9a93-5e41-41ad-86c5-83c6bbd73817", 00:19:51.724 "assigned_rate_limits": { 00:19:51.724 "rw_ios_per_sec": 0, 00:19:51.724 "rw_mbytes_per_sec": 0, 00:19:51.724 "r_mbytes_per_sec": 0, 00:19:51.724 "w_mbytes_per_sec": 0 00:19:51.724 }, 00:19:51.724 "claimed": false, 00:19:51.724 "zoned": false, 00:19:51.724 "supported_io_types": { 00:19:51.724 "read": true, 00:19:51.724 "write": true, 00:19:51.724 "unmap": false, 00:19:51.724 "flush": false, 
00:19:51.724 "reset": true, 00:19:51.724 "nvme_admin": false, 00:19:51.724 "nvme_io": false, 00:19:51.724 "nvme_io_md": false, 00:19:51.724 "write_zeroes": true, 00:19:51.724 "zcopy": false, 00:19:51.724 "get_zone_info": false, 00:19:51.725 "zone_management": false, 00:19:51.725 "zone_append": false, 00:19:51.725 "compare": false, 00:19:51.725 "compare_and_write": false, 00:19:51.725 "abort": false, 00:19:51.725 "seek_hole": false, 00:19:51.725 "seek_data": false, 00:19:51.725 "copy": false, 00:19:51.725 "nvme_iov_md": false 00:19:51.725 }, 00:19:51.725 "driver_specific": { 00:19:51.725 "raid": { 00:19:51.725 "uuid": "f9cc9a93-5e41-41ad-86c5-83c6bbd73817", 00:19:51.725 "strip_size_kb": 64, 00:19:51.725 "state": "online", 00:19:51.725 "raid_level": "raid5f", 00:19:51.725 "superblock": true, 00:19:51.725 "num_base_bdevs": 4, 00:19:51.725 "num_base_bdevs_discovered": 4, 00:19:51.725 "num_base_bdevs_operational": 4, 00:19:51.725 "base_bdevs_list": [ 00:19:51.725 { 00:19:51.725 "name": "NewBaseBdev", 00:19:51.725 "uuid": "67a115c3-95ba-45f4-bde6-b023ba129de9", 00:19:51.725 "is_configured": true, 00:19:51.725 "data_offset": 2048, 00:19:51.725 "data_size": 63488 00:19:51.725 }, 00:19:51.725 { 00:19:51.725 "name": "BaseBdev2", 00:19:51.725 "uuid": "97ced43d-86b4-4dc9-82d3-14130b8cdbb0", 00:19:51.725 "is_configured": true, 00:19:51.725 "data_offset": 2048, 00:19:51.725 "data_size": 63488 00:19:51.725 }, 00:19:51.725 { 00:19:51.725 "name": "BaseBdev3", 00:19:51.725 "uuid": "b77f4898-1588-4204-bec5-7943e8c4d583", 00:19:51.725 "is_configured": true, 00:19:51.725 "data_offset": 2048, 00:19:51.725 "data_size": 63488 00:19:51.725 }, 00:19:51.725 { 00:19:51.725 "name": "BaseBdev4", 00:19:51.725 "uuid": "a23452bd-3a7f-46f6-b9f0-26a589eb2ee5", 00:19:51.725 "is_configured": true, 00:19:51.725 "data_offset": 2048, 00:19:51.725 "data_size": 63488 00:19:51.725 } 00:19:51.725 ] 00:19:51.725 } 00:19:51.725 } 00:19:51.725 }' 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:51.725 BaseBdev2 00:19:51.725 BaseBdev3 00:19:51.725 BaseBdev4' 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:51.725 
18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.725 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.985 [2024-12-06 18:18:17.377850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:51.985 [2024-12-06 18:18:17.377889] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:51.985 [2024-12-06 18:18:17.377995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:51.985 [2024-12-06 18:18:17.378393] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:51.985 [2024-12-06 18:18:17.378412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83881 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83881 ']' 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83881 
00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83881 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.985 killing process with pid 83881 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83881' 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83881 00:19:51.985 [2024-12-06 18:18:17.420097] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:51.985 18:18:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83881 00:19:52.554 [2024-12-06 18:18:17.770816] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:53.490 18:18:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:53.490 00:19:53.490 real 0m12.980s 00:19:53.490 user 0m21.543s 00:19:53.490 sys 0m1.829s 00:19:53.490 18:18:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.490 ************************************ 00:19:53.490 END TEST raid5f_state_function_test_sb 00:19:53.490 ************************************ 00:19:53.490 18:18:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.490 18:18:18 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:19:53.490 18:18:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:19:53.490 18:18:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.490 18:18:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:53.490 ************************************ 00:19:53.490 START TEST raid5f_superblock_test 00:19:53.490 ************************************ 00:19:53.490 18:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:19:53.490 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:53.490 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:53.490 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:53.490 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:53.490 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:53.490 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:53.490 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:53.490 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:53.490 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84563 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84563 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84563 ']' 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.491 18:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.491 [2024-12-06 18:18:19.001875] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:19:53.491 [2024-12-06 18:18:19.002067] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84563 ] 00:19:53.749 [2024-12-06 18:18:19.188614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.007 [2024-12-06 18:18:19.327970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.265 [2024-12-06 18:18:19.536555] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.265 [2024-12-06 18:18:19.536609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.523 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.523 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:54.523 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:54.523 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:54.523 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:54.523 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:54.523 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:54.523 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:54.523 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:54.523 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:54.523 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:54.523 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.523 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.781 malloc1 00:19:54.781 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.781 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:54.781 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.781 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.781 [2024-12-06 18:18:20.075098] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:54.781 [2024-12-06 18:18:20.075319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.782 [2024-12-06 18:18:20.075405] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:54.782 [2024-12-06 18:18:20.075531] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.782 [2024-12-06 18:18:20.078393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.782 [2024-12-06 18:18:20.078563] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:54.782 pt1 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 malloc2 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 [2024-12-06 18:18:20.126071] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:54.782 [2024-12-06 18:18:20.126139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.782 [2024-12-06 18:18:20.126174] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:54.782 [2024-12-06 18:18:20.126188] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.782 [2024-12-06 18:18:20.129061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.782 [2024-12-06 18:18:20.129105] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:54.782 pt2 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 malloc3 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 [2024-12-06 18:18:20.193752] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:54.782 [2024-12-06 18:18:20.193850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.782 [2024-12-06 18:18:20.193893] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:54.782 [2024-12-06 18:18:20.193908] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.782 [2024-12-06 18:18:20.196732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.782 [2024-12-06 18:18:20.196805] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:54.782 pt3 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 18:18:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 malloc4 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 [2024-12-06 18:18:20.251630] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:54.782 [2024-12-06 18:18:20.251841] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.782 [2024-12-06 18:18:20.251886] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:54.782 [2024-12-06 18:18:20.251901] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.782 [2024-12-06 18:18:20.254617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.782 [2024-12-06 18:18:20.254664] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:54.782 pt4 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:54.782 [2024-12-06 18:18:20.259668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:54.782 [2024-12-06 18:18:20.262132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:54.782 [2024-12-06 18:18:20.262251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:54.782 [2024-12-06 18:18:20.262323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:54.782 [2024-12-06 18:18:20.262592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:54.782 [2024-12-06 18:18:20.262614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:54.782 [2024-12-06 18:18:20.262935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:54.782 [2024-12-06 18:18:20.269889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:54.782 [2024-12-06 18:18:20.270041] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:54.782 [2024-12-06 18:18:20.270474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:54.782 
18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.040 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.040 "name": "raid_bdev1", 00:19:55.040 "uuid": "8bbc9d4a-79eb-4e26-9601-82daeaafde94", 00:19:55.040 "strip_size_kb": 64, 00:19:55.040 "state": "online", 00:19:55.040 "raid_level": "raid5f", 00:19:55.040 "superblock": true, 00:19:55.040 "num_base_bdevs": 4, 00:19:55.040 "num_base_bdevs_discovered": 4, 00:19:55.040 "num_base_bdevs_operational": 4, 00:19:55.040 "base_bdevs_list": [ 00:19:55.040 { 00:19:55.040 "name": "pt1", 00:19:55.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:55.040 "is_configured": true, 00:19:55.040 "data_offset": 2048, 00:19:55.040 "data_size": 63488 00:19:55.040 }, 00:19:55.040 { 00:19:55.040 "name": "pt2", 00:19:55.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:55.040 "is_configured": true, 00:19:55.040 "data_offset": 2048, 00:19:55.040 
"data_size": 63488 00:19:55.040 }, 00:19:55.040 { 00:19:55.040 "name": "pt3", 00:19:55.040 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:55.040 "is_configured": true, 00:19:55.040 "data_offset": 2048, 00:19:55.040 "data_size": 63488 00:19:55.040 }, 00:19:55.040 { 00:19:55.040 "name": "pt4", 00:19:55.040 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:55.040 "is_configured": true, 00:19:55.040 "data_offset": 2048, 00:19:55.040 "data_size": 63488 00:19:55.040 } 00:19:55.040 ] 00:19:55.040 }' 00:19:55.040 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.040 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.297 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:55.297 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:55.297 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:55.297 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:55.297 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:55.297 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:55.555 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:55.555 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.555 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.555 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:55.555 [2024-12-06 18:18:20.822913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:55.555 18:18:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.555 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:55.555 "name": "raid_bdev1", 00:19:55.555 "aliases": [ 00:19:55.555 "8bbc9d4a-79eb-4e26-9601-82daeaafde94" 00:19:55.555 ], 00:19:55.555 "product_name": "Raid Volume", 00:19:55.555 "block_size": 512, 00:19:55.555 "num_blocks": 190464, 00:19:55.555 "uuid": "8bbc9d4a-79eb-4e26-9601-82daeaafde94", 00:19:55.555 "assigned_rate_limits": { 00:19:55.555 "rw_ios_per_sec": 0, 00:19:55.555 "rw_mbytes_per_sec": 0, 00:19:55.555 "r_mbytes_per_sec": 0, 00:19:55.555 "w_mbytes_per_sec": 0 00:19:55.555 }, 00:19:55.555 "claimed": false, 00:19:55.555 "zoned": false, 00:19:55.555 "supported_io_types": { 00:19:55.555 "read": true, 00:19:55.555 "write": true, 00:19:55.555 "unmap": false, 00:19:55.555 "flush": false, 00:19:55.555 "reset": true, 00:19:55.555 "nvme_admin": false, 00:19:55.555 "nvme_io": false, 00:19:55.555 "nvme_io_md": false, 00:19:55.555 "write_zeroes": true, 00:19:55.555 "zcopy": false, 00:19:55.555 "get_zone_info": false, 00:19:55.555 "zone_management": false, 00:19:55.555 "zone_append": false, 00:19:55.555 "compare": false, 00:19:55.555 "compare_and_write": false, 00:19:55.555 "abort": false, 00:19:55.555 "seek_hole": false, 00:19:55.555 "seek_data": false, 00:19:55.555 "copy": false, 00:19:55.555 "nvme_iov_md": false 00:19:55.555 }, 00:19:55.555 "driver_specific": { 00:19:55.555 "raid": { 00:19:55.555 "uuid": "8bbc9d4a-79eb-4e26-9601-82daeaafde94", 00:19:55.555 "strip_size_kb": 64, 00:19:55.555 "state": "online", 00:19:55.555 "raid_level": "raid5f", 00:19:55.555 "superblock": true, 00:19:55.555 "num_base_bdevs": 4, 00:19:55.555 "num_base_bdevs_discovered": 4, 00:19:55.555 "num_base_bdevs_operational": 4, 00:19:55.555 "base_bdevs_list": [ 00:19:55.555 { 00:19:55.555 "name": "pt1", 00:19:55.555 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:55.555 "is_configured": true, 00:19:55.555 "data_offset": 2048, 
00:19:55.555 "data_size": 63488 00:19:55.555 }, 00:19:55.555 { 00:19:55.555 "name": "pt2", 00:19:55.555 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:55.555 "is_configured": true, 00:19:55.555 "data_offset": 2048, 00:19:55.555 "data_size": 63488 00:19:55.555 }, 00:19:55.555 { 00:19:55.555 "name": "pt3", 00:19:55.555 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:55.555 "is_configured": true, 00:19:55.555 "data_offset": 2048, 00:19:55.555 "data_size": 63488 00:19:55.555 }, 00:19:55.555 { 00:19:55.555 "name": "pt4", 00:19:55.555 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:55.555 "is_configured": true, 00:19:55.555 "data_offset": 2048, 00:19:55.555 "data_size": 63488 00:19:55.555 } 00:19:55.556 ] 00:19:55.556 } 00:19:55.556 } 00:19:55.556 }' 00:19:55.556 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:55.556 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:55.556 pt2 00:19:55.556 pt3 00:19:55.556 pt4' 00:19:55.556 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.556 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:55.556 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:55.556 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:55.556 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.556 18:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.556 18:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.556 18:18:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.556 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:55.556 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:55.556 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:55.556 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:55.556 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.556 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.556 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.556 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.814 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:55.814 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:55.814 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:55.814 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:55.814 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.814 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.814 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.814 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.814 18:18:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:55.814 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.815 [2024-12-06 18:18:21.218997] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8bbc9d4a-79eb-4e26-9601-82daeaafde94 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
8bbc9d4a-79eb-4e26-9601-82daeaafde94 ']' 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.815 [2024-12-06 18:18:21.270779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:55.815 [2024-12-06 18:18:21.270808] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:55.815 [2024-12-06 18:18:21.270913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:55.815 [2024-12-06 18:18:21.271031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:55.815 [2024-12-06 18:18:21.271054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:55.815 
18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.815 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.073 18:18:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.073 [2024-12-06 18:18:21.426846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:56.073 [2024-12-06 18:18:21.429362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:56.073 [2024-12-06 18:18:21.429587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:56.073 [2024-12-06 18:18:21.429657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:56.073 [2024-12-06 18:18:21.429729] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:56.073 [2024-12-06 18:18:21.429813] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:56.073 [2024-12-06 18:18:21.429847] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:56.073 [2024-12-06 18:18:21.429875] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:56.073 [2024-12-06 18:18:21.429896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:56.073 [2024-12-06 18:18:21.429912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:56.073 request: 00:19:56.073 { 00:19:56.073 "name": "raid_bdev1", 00:19:56.073 "raid_level": "raid5f", 00:19:56.073 "base_bdevs": [ 00:19:56.073 "malloc1", 00:19:56.073 "malloc2", 00:19:56.073 "malloc3", 00:19:56.073 "malloc4" 00:19:56.073 ], 00:19:56.073 "strip_size_kb": 64, 00:19:56.073 "superblock": false, 00:19:56.073 "method": "bdev_raid_create", 00:19:56.073 "req_id": 1 00:19:56.073 } 00:19:56.073 Got JSON-RPC error response 
00:19:56.073 response: 00:19:56.073 { 00:19:56.073 "code": -17, 00:19:56.073 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:56.073 } 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.073 [2024-12-06 18:18:21.494893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:56.073 [2024-12-06 18:18:21.494974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:19:56.073 [2024-12-06 18:18:21.495011] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:56.073 [2024-12-06 18:18:21.495028] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.073 [2024-12-06 18:18:21.497980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.073 [2024-12-06 18:18:21.498032] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:56.073 [2024-12-06 18:18:21.498145] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:56.073 [2024-12-06 18:18:21.498216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:56.073 pt1 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.073 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:19:56.074 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.074 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.074 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.074 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.074 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.074 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.074 "name": "raid_bdev1", 00:19:56.074 "uuid": "8bbc9d4a-79eb-4e26-9601-82daeaafde94", 00:19:56.074 "strip_size_kb": 64, 00:19:56.074 "state": "configuring", 00:19:56.074 "raid_level": "raid5f", 00:19:56.074 "superblock": true, 00:19:56.074 "num_base_bdevs": 4, 00:19:56.074 "num_base_bdevs_discovered": 1, 00:19:56.074 "num_base_bdevs_operational": 4, 00:19:56.074 "base_bdevs_list": [ 00:19:56.074 { 00:19:56.074 "name": "pt1", 00:19:56.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:56.074 "is_configured": true, 00:19:56.074 "data_offset": 2048, 00:19:56.074 "data_size": 63488 00:19:56.074 }, 00:19:56.074 { 00:19:56.074 "name": null, 00:19:56.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:56.074 "is_configured": false, 00:19:56.074 "data_offset": 2048, 00:19:56.074 "data_size": 63488 00:19:56.074 }, 00:19:56.074 { 00:19:56.074 "name": null, 00:19:56.074 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:56.074 "is_configured": false, 00:19:56.074 "data_offset": 2048, 00:19:56.074 "data_size": 63488 00:19:56.074 }, 00:19:56.074 { 00:19:56.074 "name": null, 00:19:56.074 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:56.074 "is_configured": false, 00:19:56.074 "data_offset": 2048, 00:19:56.074 "data_size": 63488 00:19:56.074 } 00:19:56.074 ] 00:19:56.074 }' 
00:19:56.074 18:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.074 18:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.641 [2024-12-06 18:18:22.063096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:56.641 [2024-12-06 18:18:22.063317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.641 [2024-12-06 18:18:22.063356] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:56.641 [2024-12-06 18:18:22.063374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.641 [2024-12-06 18:18:22.063945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.641 [2024-12-06 18:18:22.063992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:56.641 [2024-12-06 18:18:22.064094] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:56.641 [2024-12-06 18:18:22.064131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:56.641 pt2 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.641 [2024-12-06 18:18:22.071058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.641 "name": "raid_bdev1", 00:19:56.641 "uuid": "8bbc9d4a-79eb-4e26-9601-82daeaafde94", 00:19:56.641 "strip_size_kb": 64, 00:19:56.641 "state": "configuring", 00:19:56.641 "raid_level": "raid5f", 00:19:56.641 "superblock": true, 00:19:56.641 "num_base_bdevs": 4, 00:19:56.641 "num_base_bdevs_discovered": 1, 00:19:56.641 "num_base_bdevs_operational": 4, 00:19:56.641 "base_bdevs_list": [ 00:19:56.641 { 00:19:56.641 "name": "pt1", 00:19:56.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:56.641 "is_configured": true, 00:19:56.641 "data_offset": 2048, 00:19:56.641 "data_size": 63488 00:19:56.641 }, 00:19:56.641 { 00:19:56.641 "name": null, 00:19:56.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:56.641 "is_configured": false, 00:19:56.641 "data_offset": 0, 00:19:56.641 "data_size": 63488 00:19:56.641 }, 00:19:56.641 { 00:19:56.641 "name": null, 00:19:56.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:56.641 "is_configured": false, 00:19:56.641 "data_offset": 2048, 00:19:56.641 "data_size": 63488 00:19:56.641 }, 00:19:56.641 { 00:19:56.641 "name": null, 00:19:56.641 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:56.641 "is_configured": false, 00:19:56.641 "data_offset": 2048, 00:19:56.641 "data_size": 63488 00:19:56.641 } 00:19:56.641 ] 00:19:56.641 }' 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.641 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.208 [2024-12-06 18:18:22.599232] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:57.208 [2024-12-06 18:18:22.599456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.208 [2024-12-06 18:18:22.599498] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:57.208 [2024-12-06 18:18:22.599513] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.208 [2024-12-06 18:18:22.600121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.208 [2024-12-06 18:18:22.600156] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:57.208 [2024-12-06 18:18:22.600267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:57.208 [2024-12-06 18:18:22.600311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:57.208 pt2 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.208 [2024-12-06 18:18:22.607244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:19:57.208 [2024-12-06 18:18:22.607335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.208 [2024-12-06 18:18:22.607392] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:57.208 [2024-12-06 18:18:22.607422] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.208 [2024-12-06 18:18:22.608105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.208 [2024-12-06 18:18:22.608153] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:57.208 [2024-12-06 18:18:22.608279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:57.208 [2024-12-06 18:18:22.608334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:57.208 pt3 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.208 [2024-12-06 18:18:22.615217] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:57.208 [2024-12-06 18:18:22.615290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.208 [2024-12-06 18:18:22.615335] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:57.208 [2024-12-06 18:18:22.615359] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.208 [2024-12-06 18:18:22.615909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.208 [2024-12-06 18:18:22.615970] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:57.208 [2024-12-06 18:18:22.616058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:57.208 [2024-12-06 18:18:22.616100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:57.208 [2024-12-06 18:18:22.616313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:57.208 [2024-12-06 18:18:22.616329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:57.208 [2024-12-06 18:18:22.616662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:57.208 [2024-12-06 18:18:22.623399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:57.208 [2024-12-06 18:18:22.623442] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:57.208 [2024-12-06 18:18:22.623691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.208 pt4 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.208 "name": "raid_bdev1", 00:19:57.208 "uuid": "8bbc9d4a-79eb-4e26-9601-82daeaafde94", 00:19:57.208 "strip_size_kb": 64, 00:19:57.208 "state": "online", 00:19:57.208 "raid_level": "raid5f", 00:19:57.208 "superblock": true, 00:19:57.208 "num_base_bdevs": 4, 00:19:57.208 "num_base_bdevs_discovered": 4, 00:19:57.208 "num_base_bdevs_operational": 4, 00:19:57.208 "base_bdevs_list": [ 00:19:57.208 { 00:19:57.208 "name": "pt1", 00:19:57.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:57.208 "is_configured": true, 00:19:57.208 
"data_offset": 2048, 00:19:57.208 "data_size": 63488 00:19:57.208 }, 00:19:57.208 { 00:19:57.208 "name": "pt2", 00:19:57.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.208 "is_configured": true, 00:19:57.208 "data_offset": 2048, 00:19:57.208 "data_size": 63488 00:19:57.208 }, 00:19:57.208 { 00:19:57.208 "name": "pt3", 00:19:57.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:57.208 "is_configured": true, 00:19:57.208 "data_offset": 2048, 00:19:57.208 "data_size": 63488 00:19:57.208 }, 00:19:57.208 { 00:19:57.208 "name": "pt4", 00:19:57.208 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:57.208 "is_configured": true, 00:19:57.208 "data_offset": 2048, 00:19:57.208 "data_size": 63488 00:19:57.208 } 00:19:57.208 ] 00:19:57.208 }' 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.208 18:18:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.774 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:57.774 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:57.774 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:57.774 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:57.774 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:57.774 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:57.774 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:57.774 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:57.774 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.774 18:18:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.774 [2024-12-06 18:18:23.160506] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.774 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.774 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:57.774 "name": "raid_bdev1", 00:19:57.774 "aliases": [ 00:19:57.774 "8bbc9d4a-79eb-4e26-9601-82daeaafde94" 00:19:57.774 ], 00:19:57.774 "product_name": "Raid Volume", 00:19:57.774 "block_size": 512, 00:19:57.774 "num_blocks": 190464, 00:19:57.774 "uuid": "8bbc9d4a-79eb-4e26-9601-82daeaafde94", 00:19:57.774 "assigned_rate_limits": { 00:19:57.774 "rw_ios_per_sec": 0, 00:19:57.774 "rw_mbytes_per_sec": 0, 00:19:57.774 "r_mbytes_per_sec": 0, 00:19:57.774 "w_mbytes_per_sec": 0 00:19:57.774 }, 00:19:57.774 "claimed": false, 00:19:57.774 "zoned": false, 00:19:57.774 "supported_io_types": { 00:19:57.774 "read": true, 00:19:57.774 "write": true, 00:19:57.774 "unmap": false, 00:19:57.774 "flush": false, 00:19:57.774 "reset": true, 00:19:57.774 "nvme_admin": false, 00:19:57.774 "nvme_io": false, 00:19:57.774 "nvme_io_md": false, 00:19:57.774 "write_zeroes": true, 00:19:57.774 "zcopy": false, 00:19:57.774 "get_zone_info": false, 00:19:57.774 "zone_management": false, 00:19:57.774 "zone_append": false, 00:19:57.774 "compare": false, 00:19:57.774 "compare_and_write": false, 00:19:57.774 "abort": false, 00:19:57.774 "seek_hole": false, 00:19:57.774 "seek_data": false, 00:19:57.774 "copy": false, 00:19:57.774 "nvme_iov_md": false 00:19:57.774 }, 00:19:57.774 "driver_specific": { 00:19:57.774 "raid": { 00:19:57.774 "uuid": "8bbc9d4a-79eb-4e26-9601-82daeaafde94", 00:19:57.774 "strip_size_kb": 64, 00:19:57.774 "state": "online", 00:19:57.774 "raid_level": "raid5f", 00:19:57.774 "superblock": true, 00:19:57.774 "num_base_bdevs": 4, 00:19:57.774 "num_base_bdevs_discovered": 4, 
00:19:57.774 "num_base_bdevs_operational": 4, 00:19:57.774 "base_bdevs_list": [ 00:19:57.774 { 00:19:57.774 "name": "pt1", 00:19:57.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:57.774 "is_configured": true, 00:19:57.774 "data_offset": 2048, 00:19:57.774 "data_size": 63488 00:19:57.774 }, 00:19:57.774 { 00:19:57.774 "name": "pt2", 00:19:57.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.774 "is_configured": true, 00:19:57.774 "data_offset": 2048, 00:19:57.774 "data_size": 63488 00:19:57.774 }, 00:19:57.774 { 00:19:57.774 "name": "pt3", 00:19:57.774 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:57.774 "is_configured": true, 00:19:57.774 "data_offset": 2048, 00:19:57.774 "data_size": 63488 00:19:57.774 }, 00:19:57.774 { 00:19:57.774 "name": "pt4", 00:19:57.774 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:57.774 "is_configured": true, 00:19:57.774 "data_offset": 2048, 00:19:57.774 "data_size": 63488 00:19:57.774 } 00:19:57.774 ] 00:19:57.774 } 00:19:57.774 } 00:19:57.774 }' 00:19:57.774 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:57.774 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:57.774 pt2 00:19:57.774 pt3 00:19:57.774 pt4' 00:19:57.774 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.032 18:18:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.032 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.033 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:58.033 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:58.033 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:58.033 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:58.033 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.033 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.033 [2024-12-06 18:18:23.536651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:58.291 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.291 
18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8bbc9d4a-79eb-4e26-9601-82daeaafde94 '!=' 8bbc9d4a-79eb-4e26-9601-82daeaafde94 ']' 00:19:58.291 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:58.291 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:58.291 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:58.291 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:58.291 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.291 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.291 [2024-12-06 18:18:23.588490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:58.291 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.291 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.292 "name": "raid_bdev1", 00:19:58.292 "uuid": "8bbc9d4a-79eb-4e26-9601-82daeaafde94", 00:19:58.292 "strip_size_kb": 64, 00:19:58.292 "state": "online", 00:19:58.292 "raid_level": "raid5f", 00:19:58.292 "superblock": true, 00:19:58.292 "num_base_bdevs": 4, 00:19:58.292 "num_base_bdevs_discovered": 3, 00:19:58.292 "num_base_bdevs_operational": 3, 00:19:58.292 "base_bdevs_list": [ 00:19:58.292 { 00:19:58.292 "name": null, 00:19:58.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.292 "is_configured": false, 00:19:58.292 "data_offset": 0, 00:19:58.292 "data_size": 63488 00:19:58.292 }, 00:19:58.292 { 00:19:58.292 "name": "pt2", 00:19:58.292 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.292 "is_configured": true, 00:19:58.292 "data_offset": 2048, 00:19:58.292 "data_size": 63488 00:19:58.292 }, 00:19:58.292 { 00:19:58.292 "name": "pt3", 00:19:58.292 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:58.292 "is_configured": true, 00:19:58.292 "data_offset": 2048, 00:19:58.292 "data_size": 63488 00:19:58.292 }, 00:19:58.292 { 00:19:58.292 "name": "pt4", 00:19:58.292 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:58.292 "is_configured": true, 00:19:58.292 
"data_offset": 2048, 00:19:58.292 "data_size": 63488 00:19:58.292 } 00:19:58.292 ] 00:19:58.292 }' 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.292 18:18:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.859 [2024-12-06 18:18:24.116711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.859 [2024-12-06 18:18:24.116814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.859 [2024-12-06 18:18:24.116953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.859 [2024-12-06 18:18:24.117110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.859 [2024-12-06 18:18:24.117132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.859 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.860 [2024-12-06 18:18:24.208650] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:58.860 [2024-12-06 18:18:24.208759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.860 [2024-12-06 18:18:24.208816] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:58.860 [2024-12-06 18:18:24.208835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.860 [2024-12-06 18:18:24.212784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.860 [2024-12-06 18:18:24.212835] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:58.860 [2024-12-06 18:18:24.212968] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:58.860 [2024-12-06 18:18:24.213070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:58.860 pt2 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.860 "name": "raid_bdev1", 00:19:58.860 "uuid": "8bbc9d4a-79eb-4e26-9601-82daeaafde94", 00:19:58.860 "strip_size_kb": 64, 00:19:58.860 "state": "configuring", 00:19:58.860 "raid_level": "raid5f", 00:19:58.860 "superblock": true, 00:19:58.860 
"num_base_bdevs": 4, 00:19:58.860 "num_base_bdevs_discovered": 1, 00:19:58.860 "num_base_bdevs_operational": 3, 00:19:58.860 "base_bdevs_list": [ 00:19:58.860 { 00:19:58.860 "name": null, 00:19:58.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.860 "is_configured": false, 00:19:58.860 "data_offset": 2048, 00:19:58.860 "data_size": 63488 00:19:58.860 }, 00:19:58.860 { 00:19:58.860 "name": "pt2", 00:19:58.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.860 "is_configured": true, 00:19:58.860 "data_offset": 2048, 00:19:58.860 "data_size": 63488 00:19:58.860 }, 00:19:58.860 { 00:19:58.860 "name": null, 00:19:58.860 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:58.860 "is_configured": false, 00:19:58.860 "data_offset": 2048, 00:19:58.860 "data_size": 63488 00:19:58.860 }, 00:19:58.860 { 00:19:58.860 "name": null, 00:19:58.860 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:58.860 "is_configured": false, 00:19:58.860 "data_offset": 2048, 00:19:58.860 "data_size": 63488 00:19:58.860 } 00:19:58.860 ] 00:19:58.860 }' 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.860 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.427 [2024-12-06 18:18:24.749256] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:59.427 [2024-12-06 
18:18:24.749550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.427 [2024-12-06 18:18:24.749600] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:59.427 [2024-12-06 18:18:24.749617] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.427 [2024-12-06 18:18:24.750229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.427 [2024-12-06 18:18:24.750263] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:59.427 [2024-12-06 18:18:24.750380] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:59.427 [2024-12-06 18:18:24.750413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:59.427 pt3 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.427 "name": "raid_bdev1", 00:19:59.427 "uuid": "8bbc9d4a-79eb-4e26-9601-82daeaafde94", 00:19:59.427 "strip_size_kb": 64, 00:19:59.427 "state": "configuring", 00:19:59.427 "raid_level": "raid5f", 00:19:59.427 "superblock": true, 00:19:59.427 "num_base_bdevs": 4, 00:19:59.427 "num_base_bdevs_discovered": 2, 00:19:59.427 "num_base_bdevs_operational": 3, 00:19:59.427 "base_bdevs_list": [ 00:19:59.427 { 00:19:59.427 "name": null, 00:19:59.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.427 "is_configured": false, 00:19:59.427 "data_offset": 2048, 00:19:59.427 "data_size": 63488 00:19:59.427 }, 00:19:59.427 { 00:19:59.427 "name": "pt2", 00:19:59.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:59.427 "is_configured": true, 00:19:59.427 "data_offset": 2048, 00:19:59.427 "data_size": 63488 00:19:59.427 }, 00:19:59.427 { 00:19:59.427 "name": "pt3", 00:19:59.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:59.427 "is_configured": true, 00:19:59.427 "data_offset": 2048, 00:19:59.427 "data_size": 63488 00:19:59.427 }, 00:19:59.427 { 00:19:59.427 "name": null, 00:19:59.427 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:59.427 "is_configured": false, 00:19:59.427 "data_offset": 2048, 
00:19:59.427 "data_size": 63488 00:19:59.427 } 00:19:59.427 ] 00:19:59.427 }' 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.427 18:18:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.995 [2024-12-06 18:18:25.281521] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:59.995 [2024-12-06 18:18:25.281743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.995 [2024-12-06 18:18:25.281917] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:59.995 [2024-12-06 18:18:25.282044] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.995 [2024-12-06 18:18:25.282643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.995 [2024-12-06 18:18:25.282669] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:59.995 [2024-12-06 18:18:25.282796] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:59.995 [2024-12-06 18:18:25.282839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:59.995 [2024-12-06 18:18:25.283033] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:59.995 [2024-12-06 18:18:25.283049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:59.995 [2024-12-06 18:18:25.283362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:59.995 [2024-12-06 18:18:25.289918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:59.995 [2024-12-06 18:18:25.289950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:59.995 [2024-12-06 18:18:25.290381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.995 pt4 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.995 
18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.995 "name": "raid_bdev1", 00:19:59.995 "uuid": "8bbc9d4a-79eb-4e26-9601-82daeaafde94", 00:19:59.995 "strip_size_kb": 64, 00:19:59.995 "state": "online", 00:19:59.995 "raid_level": "raid5f", 00:19:59.995 "superblock": true, 00:19:59.995 "num_base_bdevs": 4, 00:19:59.995 "num_base_bdevs_discovered": 3, 00:19:59.995 "num_base_bdevs_operational": 3, 00:19:59.995 "base_bdevs_list": [ 00:19:59.995 { 00:19:59.995 "name": null, 00:19:59.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.995 "is_configured": false, 00:19:59.995 "data_offset": 2048, 00:19:59.995 "data_size": 63488 00:19:59.995 }, 00:19:59.995 { 00:19:59.995 "name": "pt2", 00:19:59.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:59.995 "is_configured": true, 00:19:59.995 "data_offset": 2048, 00:19:59.995 "data_size": 63488 00:19:59.995 }, 00:19:59.995 { 00:19:59.995 "name": "pt3", 00:19:59.995 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:59.995 "is_configured": true, 00:19:59.995 "data_offset": 2048, 00:19:59.995 "data_size": 63488 00:19:59.995 }, 00:19:59.995 { 00:19:59.995 "name": "pt4", 00:19:59.995 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:59.995 "is_configured": true, 00:19:59.995 "data_offset": 2048, 00:19:59.995 "data_size": 63488 00:19:59.995 } 00:19:59.995 ] 00:19:59.995 }' 00:19:59.995 18:18:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.995 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.561 [2024-12-06 18:18:25.829951] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.561 [2024-12-06 18:18:25.829986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:00.561 [2024-12-06 18:18:25.830091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.561 [2024-12-06 18:18:25.830225] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.561 [2024-12-06 18:18:25.830245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.561 [2024-12-06 18:18:25.897958] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:00.561 [2024-12-06 18:18:25.898037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.561 [2024-12-06 18:18:25.898073] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:00.561 [2024-12-06 18:18:25.898093] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.561 [2024-12-06 18:18:25.900991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.561 [2024-12-06 18:18:25.901043] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:00.561 [2024-12-06 18:18:25.901149] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:00.561 [2024-12-06 18:18:25.901214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:00.561 
[2024-12-06 18:18:25.901383] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:00.561 [2024-12-06 18:18:25.901412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.561 [2024-12-06 18:18:25.901435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:00.561 [2024-12-06 18:18:25.901512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:00.561 [2024-12-06 18:18:25.901659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:00.561 pt1 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:00.561 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.562 "name": "raid_bdev1", 00:20:00.562 "uuid": "8bbc9d4a-79eb-4e26-9601-82daeaafde94", 00:20:00.562 "strip_size_kb": 64, 00:20:00.562 "state": "configuring", 00:20:00.562 "raid_level": "raid5f", 00:20:00.562 "superblock": true, 00:20:00.562 "num_base_bdevs": 4, 00:20:00.562 "num_base_bdevs_discovered": 2, 00:20:00.562 "num_base_bdevs_operational": 3, 00:20:00.562 "base_bdevs_list": [ 00:20:00.562 { 00:20:00.562 "name": null, 00:20:00.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.562 "is_configured": false, 00:20:00.562 "data_offset": 2048, 00:20:00.562 "data_size": 63488 00:20:00.562 }, 00:20:00.562 { 00:20:00.562 "name": "pt2", 00:20:00.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:00.562 "is_configured": true, 00:20:00.562 "data_offset": 2048, 00:20:00.562 "data_size": 63488 00:20:00.562 }, 00:20:00.562 { 00:20:00.562 "name": "pt3", 00:20:00.562 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:00.562 "is_configured": true, 00:20:00.562 "data_offset": 2048, 00:20:00.562 "data_size": 63488 00:20:00.562 }, 00:20:00.562 { 00:20:00.562 "name": null, 00:20:00.562 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:00.562 "is_configured": false, 00:20:00.562 "data_offset": 2048, 00:20:00.562 "data_size": 63488 00:20:00.562 } 00:20:00.562 ] 
00:20:00.562 }' 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.562 18:18:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.131 [2024-12-06 18:18:26.478167] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:01.131 [2024-12-06 18:18:26.478245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.131 [2024-12-06 18:18:26.478281] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:20:01.131 [2024-12-06 18:18:26.478296] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.131 [2024-12-06 18:18:26.478910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.131 [2024-12-06 18:18:26.479075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:20:01.131 [2024-12-06 18:18:26.479213] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:01.131 [2024-12-06 18:18:26.479249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:01.131 [2024-12-06 18:18:26.479425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:01.131 [2024-12-06 18:18:26.479440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:01.131 [2024-12-06 18:18:26.479754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:01.131 [2024-12-06 18:18:26.486211] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:01.131 [2024-12-06 18:18:26.486241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:01.131 [2024-12-06 18:18:26.486591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.131 pt4 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.131 18:18:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.131 "name": "raid_bdev1", 00:20:01.131 "uuid": "8bbc9d4a-79eb-4e26-9601-82daeaafde94", 00:20:01.131 "strip_size_kb": 64, 00:20:01.131 "state": "online", 00:20:01.131 "raid_level": "raid5f", 00:20:01.131 "superblock": true, 00:20:01.131 "num_base_bdevs": 4, 00:20:01.131 "num_base_bdevs_discovered": 3, 00:20:01.131 "num_base_bdevs_operational": 3, 00:20:01.131 "base_bdevs_list": [ 00:20:01.131 { 00:20:01.131 "name": null, 00:20:01.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.131 "is_configured": false, 00:20:01.131 "data_offset": 2048, 00:20:01.131 "data_size": 63488 00:20:01.131 }, 00:20:01.131 { 00:20:01.131 "name": "pt2", 00:20:01.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:01.131 "is_configured": true, 00:20:01.131 "data_offset": 2048, 00:20:01.131 "data_size": 63488 00:20:01.131 }, 00:20:01.131 { 00:20:01.131 "name": "pt3", 00:20:01.131 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:01.131 "is_configured": true, 00:20:01.131 "data_offset": 2048, 00:20:01.131 "data_size": 63488 
00:20:01.131 }, 00:20:01.131 { 00:20:01.131 "name": "pt4", 00:20:01.131 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:01.131 "is_configured": true, 00:20:01.131 "data_offset": 2048, 00:20:01.131 "data_size": 63488 00:20:01.131 } 00:20:01.131 ] 00:20:01.131 }' 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.131 18:18:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.699 [2024-12-06 18:18:27.070337] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8bbc9d4a-79eb-4e26-9601-82daeaafde94 '!=' 8bbc9d4a-79eb-4e26-9601-82daeaafde94 ']' 00:20:01.699 18:18:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84563 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84563 ']' 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84563 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84563 00:20:01.699 killing process with pid 84563 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84563' 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84563 00:20:01.699 [2024-12-06 18:18:27.146946] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:01.699 18:18:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84563 00:20:01.699 [2024-12-06 18:18:27.147053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:01.699 [2024-12-06 18:18:27.147194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:01.699 [2024-12-06 18:18:27.147217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:02.307 [2024-12-06 18:18:27.501346] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:03.243 ************************************ 00:20:03.243 END TEST raid5f_superblock_test 00:20:03.243 
************************************ 00:20:03.243 18:18:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:03.243 00:20:03.243 real 0m9.683s 00:20:03.243 user 0m15.946s 00:20:03.243 sys 0m1.390s 00:20:03.243 18:18:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.243 18:18:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.243 18:18:28 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:20:03.243 18:18:28 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:20:03.243 18:18:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:03.243 18:18:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.243 18:18:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:03.243 ************************************ 00:20:03.243 START TEST raid5f_rebuild_test 00:20:03.243 ************************************ 00:20:03.243 18:18:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:20:03.243 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:03.243 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:03.243 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:03.243 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:03.243 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:03.243 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:03.243 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:03.243 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:20:03.243 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:03.243 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:03.243 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:03.243 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:03.244 18:18:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:03.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85061 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85061 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85061 ']' 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.244 18:18:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.244 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:03.244 Zero copy mechanism will not be used. 00:20:03.244 [2024-12-06 18:18:28.744441] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:20:03.244 [2024-12-06 18:18:28.744629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85061 ] 00:20:03.503 [2024-12-06 18:18:28.935138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.763 [2024-12-06 18:18:29.092224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.022 [2024-12-06 18:18:29.301549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.022 [2024-12-06 18:18:29.301624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.282 BaseBdev1_malloc 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.282 [2024-12-06 18:18:29.757632] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:20:04.282 [2024-12-06 18:18:29.757727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.282 [2024-12-06 18:18:29.757764] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:04.282 [2024-12-06 18:18:29.757822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.282 [2024-12-06 18:18:29.760651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.282 [2024-12-06 18:18:29.760723] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:04.282 BaseBdev1 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.282 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.542 BaseBdev2_malloc 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.542 [2024-12-06 18:18:29.805883] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:04.542 [2024-12-06 18:18:29.805968] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.542 [2024-12-06 18:18:29.806007] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:04.542 [2024-12-06 18:18:29.806030] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.542 [2024-12-06 18:18:29.808788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.542 [2024-12-06 18:18:29.808841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:04.542 BaseBdev2 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.542 BaseBdev3_malloc 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.542 [2024-12-06 18:18:29.872082] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:04.542 [2024-12-06 18:18:29.872187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.542 [2024-12-06 18:18:29.872222] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:04.542 [2024-12-06 18:18:29.872243] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.542 
[2024-12-06 18:18:29.875127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.542 [2024-12-06 18:18:29.875400] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:04.542 BaseBdev3 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.542 BaseBdev4_malloc 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.542 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.542 [2024-12-06 18:18:29.925997] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:04.542 [2024-12-06 18:18:29.926099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.543 [2024-12-06 18:18:29.926132] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:04.543 [2024-12-06 18:18:29.926203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.543 [2024-12-06 18:18:29.929378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.543 [2024-12-06 18:18:29.929439] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:20:04.543 BaseBdev4 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.543 spare_malloc 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.543 spare_delay 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.543 [2024-12-06 18:18:29.986578] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:04.543 [2024-12-06 18:18:29.986654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.543 [2024-12-06 18:18:29.986687] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:04.543 [2024-12-06 18:18:29.986709] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.543 [2024-12-06 18:18:29.989530] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.543 [2024-12-06 18:18:29.989587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:04.543 spare 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.543 18:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.543 [2024-12-06 18:18:29.994659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.543 [2024-12-06 18:18:29.997153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:04.543 [2024-12-06 18:18:29.997432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:04.543 [2024-12-06 18:18:29.997548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:04.543 [2024-12-06 18:18:29.997689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:04.543 [2024-12-06 18:18:29.997714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:04.543 [2024-12-06 18:18:29.998124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:04.543 [2024-12-06 18:18:30.004813] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:04.543 [2024-12-06 18:18:30.004840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:04.543 [2024-12-06 18:18:30.005101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.543 18:18:30 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.543 "name": "raid_bdev1", 00:20:04.543 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:04.543 "strip_size_kb": 64, 00:20:04.543 "state": "online", 00:20:04.543 
"raid_level": "raid5f", 00:20:04.543 "superblock": false, 00:20:04.543 "num_base_bdevs": 4, 00:20:04.543 "num_base_bdevs_discovered": 4, 00:20:04.543 "num_base_bdevs_operational": 4, 00:20:04.543 "base_bdevs_list": [ 00:20:04.543 { 00:20:04.543 "name": "BaseBdev1", 00:20:04.543 "uuid": "a1794c0f-eba0-595a-a7a7-20db86e2ef61", 00:20:04.543 "is_configured": true, 00:20:04.543 "data_offset": 0, 00:20:04.543 "data_size": 65536 00:20:04.543 }, 00:20:04.543 { 00:20:04.543 "name": "BaseBdev2", 00:20:04.543 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:04.543 "is_configured": true, 00:20:04.543 "data_offset": 0, 00:20:04.543 "data_size": 65536 00:20:04.543 }, 00:20:04.543 { 00:20:04.543 "name": "BaseBdev3", 00:20:04.543 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:04.543 "is_configured": true, 00:20:04.543 "data_offset": 0, 00:20:04.543 "data_size": 65536 00:20:04.543 }, 00:20:04.543 { 00:20:04.543 "name": "BaseBdev4", 00:20:04.543 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:04.543 "is_configured": true, 00:20:04.543 "data_offset": 0, 00:20:04.543 "data_size": 65536 00:20:04.543 } 00:20:04.543 ] 00:20:04.543 }' 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.543 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.110 [2024-12-06 18:18:30.493437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:05.110 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:05.111 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:05.111 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:05.111 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:05.111 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:05.111 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:05.111 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:05.111 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:05.111 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:20:05.111 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:05.369 [2024-12-06 18:18:30.885411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:05.627 /dev/nbd0 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:05.628 1+0 records in 00:20:05.628 1+0 records out 00:20:05.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306631 s, 13.4 MB/s 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:20:05.628 18:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:20:06.194 512+0 records in 00:20:06.194 512+0 records out 00:20:06.194 100663296 bytes (101 MB, 96 MiB) copied, 0.618723 s, 163 MB/s 00:20:06.194 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:06.194 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:06.194 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:06.194 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:06.194 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:06.194 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:06.194 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:06.453 
[2024-12-06 18:18:31.833510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.453 [2024-12-06 18:18:31.853397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.453 "name": "raid_bdev1", 00:20:06.453 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:06.453 "strip_size_kb": 64, 00:20:06.453 "state": "online", 00:20:06.453 "raid_level": "raid5f", 00:20:06.453 "superblock": false, 00:20:06.453 "num_base_bdevs": 4, 00:20:06.453 "num_base_bdevs_discovered": 3, 00:20:06.453 "num_base_bdevs_operational": 3, 00:20:06.453 "base_bdevs_list": [ 00:20:06.453 { 00:20:06.453 "name": null, 00:20:06.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.453 "is_configured": false, 00:20:06.453 "data_offset": 0, 00:20:06.453 "data_size": 65536 00:20:06.453 }, 00:20:06.453 { 00:20:06.453 "name": "BaseBdev2", 00:20:06.453 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:06.453 "is_configured": true, 00:20:06.453 "data_offset": 0, 00:20:06.453 "data_size": 65536 00:20:06.453 }, 00:20:06.453 { 00:20:06.453 "name": "BaseBdev3", 00:20:06.453 "uuid": 
"02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:06.453 "is_configured": true, 00:20:06.453 "data_offset": 0, 00:20:06.453 "data_size": 65536 00:20:06.453 }, 00:20:06.453 { 00:20:06.453 "name": "BaseBdev4", 00:20:06.453 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:06.453 "is_configured": true, 00:20:06.453 "data_offset": 0, 00:20:06.453 "data_size": 65536 00:20:06.453 } 00:20:06.453 ] 00:20:06.453 }' 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.453 18:18:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.020 18:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:07.020 18:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.020 18:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.020 [2024-12-06 18:18:32.389553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:07.020 [2024-12-06 18:18:32.404937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:20:07.020 18:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.020 18:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:07.020 [2024-12-06 18:18:32.414618] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:07.957 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.957 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.957 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:07.957 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:07.957 18:18:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.957 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.957 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.957 18:18:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.957 18:18:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.957 18:18:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.957 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:07.957 "name": "raid_bdev1", 00:20:07.957 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:07.957 "strip_size_kb": 64, 00:20:07.957 "state": "online", 00:20:07.957 "raid_level": "raid5f", 00:20:07.957 "superblock": false, 00:20:07.957 "num_base_bdevs": 4, 00:20:07.957 "num_base_bdevs_discovered": 4, 00:20:07.957 "num_base_bdevs_operational": 4, 00:20:07.957 "process": { 00:20:07.957 "type": "rebuild", 00:20:07.957 "target": "spare", 00:20:07.957 "progress": { 00:20:07.957 "blocks": 17280, 00:20:07.957 "percent": 8 00:20:07.957 } 00:20:07.957 }, 00:20:07.957 "base_bdevs_list": [ 00:20:07.957 { 00:20:07.957 "name": "spare", 00:20:07.957 "uuid": "69de93cb-d4ce-514b-a284-616c56b871df", 00:20:07.957 "is_configured": true, 00:20:07.957 "data_offset": 0, 00:20:07.957 "data_size": 65536 00:20:07.957 }, 00:20:07.957 { 00:20:07.957 "name": "BaseBdev2", 00:20:07.957 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:07.957 "is_configured": true, 00:20:07.957 "data_offset": 0, 00:20:07.957 "data_size": 65536 00:20:07.957 }, 00:20:07.957 { 00:20:07.957 "name": "BaseBdev3", 00:20:07.957 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:07.957 "is_configured": true, 00:20:07.957 "data_offset": 0, 00:20:07.957 "data_size": 65536 00:20:07.957 }, 
00:20:07.957 { 00:20:07.957 "name": "BaseBdev4", 00:20:07.957 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:07.957 "is_configured": true, 00:20:07.957 "data_offset": 0, 00:20:07.957 "data_size": 65536 00:20:07.957 } 00:20:07.957 ] 00:20:07.957 }' 00:20:07.957 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.217 [2024-12-06 18:18:33.580330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:08.217 [2024-12-06 18:18:33.628527] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:08.217 [2024-12-06 18:18:33.628632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.217 [2024-12-06 18:18:33.628664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:08.217 [2024-12-06 18:18:33.628683] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.217 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.217 "name": "raid_bdev1", 00:20:08.217 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:08.217 "strip_size_kb": 64, 00:20:08.217 "state": "online", 00:20:08.217 "raid_level": "raid5f", 00:20:08.217 "superblock": false, 00:20:08.217 "num_base_bdevs": 4, 00:20:08.217 "num_base_bdevs_discovered": 3, 00:20:08.217 "num_base_bdevs_operational": 3, 00:20:08.217 "base_bdevs_list": [ 00:20:08.217 { 00:20:08.217 "name": null, 00:20:08.217 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:08.217 "is_configured": false, 00:20:08.217 "data_offset": 0, 00:20:08.217 "data_size": 65536 00:20:08.217 }, 00:20:08.217 { 00:20:08.217 "name": "BaseBdev2", 00:20:08.217 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:08.217 "is_configured": true, 00:20:08.217 "data_offset": 0, 00:20:08.217 "data_size": 65536 00:20:08.217 }, 00:20:08.217 { 00:20:08.217 "name": "BaseBdev3", 00:20:08.217 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:08.217 "is_configured": true, 00:20:08.217 "data_offset": 0, 00:20:08.217 "data_size": 65536 00:20:08.217 }, 00:20:08.217 { 00:20:08.217 "name": "BaseBdev4", 00:20:08.217 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:08.217 "is_configured": true, 00:20:08.217 "data_offset": 0, 00:20:08.217 "data_size": 65536 00:20:08.218 } 00:20:08.218 ] 00:20:08.218 }' 00:20:08.218 18:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.218 18:18:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.799 18:18:34 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.799 "name": "raid_bdev1", 00:20:08.799 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:08.799 "strip_size_kb": 64, 00:20:08.799 "state": "online", 00:20:08.799 "raid_level": "raid5f", 00:20:08.799 "superblock": false, 00:20:08.799 "num_base_bdevs": 4, 00:20:08.799 "num_base_bdevs_discovered": 3, 00:20:08.799 "num_base_bdevs_operational": 3, 00:20:08.799 "base_bdevs_list": [ 00:20:08.799 { 00:20:08.799 "name": null, 00:20:08.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.799 "is_configured": false, 00:20:08.799 "data_offset": 0, 00:20:08.799 "data_size": 65536 00:20:08.799 }, 00:20:08.799 { 00:20:08.799 "name": "BaseBdev2", 00:20:08.799 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:08.799 "is_configured": true, 00:20:08.799 "data_offset": 0, 00:20:08.799 "data_size": 65536 00:20:08.799 }, 00:20:08.799 { 00:20:08.799 "name": "BaseBdev3", 00:20:08.799 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:08.799 "is_configured": true, 00:20:08.799 "data_offset": 0, 00:20:08.799 "data_size": 65536 00:20:08.799 }, 00:20:08.799 { 00:20:08.799 "name": "BaseBdev4", 00:20:08.799 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:08.799 "is_configured": true, 00:20:08.799 "data_offset": 0, 00:20:08.799 "data_size": 65536 00:20:08.799 } 00:20:08.799 ] 00:20:08.799 }' 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.799 18:18:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.799 [2024-12-06 18:18:34.316452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:09.058 [2024-12-06 18:18:34.331155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:20:09.058 18:18:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.058 18:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:09.058 [2024-12-06 18:18:34.340867] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.994 18:18:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.994 "name": "raid_bdev1", 00:20:09.994 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:09.994 "strip_size_kb": 64, 00:20:09.994 "state": "online", 00:20:09.994 "raid_level": "raid5f", 00:20:09.994 "superblock": false, 00:20:09.994 "num_base_bdevs": 4, 00:20:09.994 "num_base_bdevs_discovered": 4, 00:20:09.994 "num_base_bdevs_operational": 4, 00:20:09.994 "process": { 00:20:09.994 "type": "rebuild", 00:20:09.994 "target": "spare", 00:20:09.994 "progress": { 00:20:09.994 "blocks": 17280, 00:20:09.994 "percent": 8 00:20:09.994 } 00:20:09.994 }, 00:20:09.994 "base_bdevs_list": [ 00:20:09.994 { 00:20:09.994 "name": "spare", 00:20:09.994 "uuid": "69de93cb-d4ce-514b-a284-616c56b871df", 00:20:09.994 "is_configured": true, 00:20:09.994 "data_offset": 0, 00:20:09.994 "data_size": 65536 00:20:09.994 }, 00:20:09.994 { 00:20:09.994 "name": "BaseBdev2", 00:20:09.994 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:09.994 "is_configured": true, 00:20:09.994 "data_offset": 0, 00:20:09.994 "data_size": 65536 00:20:09.994 }, 00:20:09.994 { 00:20:09.994 "name": "BaseBdev3", 00:20:09.994 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:09.994 "is_configured": true, 00:20:09.994 "data_offset": 0, 00:20:09.994 "data_size": 65536 00:20:09.994 }, 00:20:09.994 { 00:20:09.994 "name": "BaseBdev4", 00:20:09.994 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:09.994 "is_configured": true, 00:20:09.994 "data_offset": 0, 00:20:09.994 "data_size": 65536 00:20:09.994 } 00:20:09.994 ] 00:20:09.994 }' 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=671 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.994 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.253 18:18:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.253 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.253 "name": "raid_bdev1", 00:20:10.253 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 
00:20:10.253 "strip_size_kb": 64, 00:20:10.253 "state": "online", 00:20:10.253 "raid_level": "raid5f", 00:20:10.253 "superblock": false, 00:20:10.253 "num_base_bdevs": 4, 00:20:10.253 "num_base_bdevs_discovered": 4, 00:20:10.253 "num_base_bdevs_operational": 4, 00:20:10.253 "process": { 00:20:10.253 "type": "rebuild", 00:20:10.253 "target": "spare", 00:20:10.253 "progress": { 00:20:10.253 "blocks": 21120, 00:20:10.253 "percent": 10 00:20:10.253 } 00:20:10.253 }, 00:20:10.253 "base_bdevs_list": [ 00:20:10.253 { 00:20:10.253 "name": "spare", 00:20:10.253 "uuid": "69de93cb-d4ce-514b-a284-616c56b871df", 00:20:10.253 "is_configured": true, 00:20:10.253 "data_offset": 0, 00:20:10.253 "data_size": 65536 00:20:10.253 }, 00:20:10.253 { 00:20:10.253 "name": "BaseBdev2", 00:20:10.253 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:10.253 "is_configured": true, 00:20:10.253 "data_offset": 0, 00:20:10.253 "data_size": 65536 00:20:10.253 }, 00:20:10.253 { 00:20:10.253 "name": "BaseBdev3", 00:20:10.253 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:10.253 "is_configured": true, 00:20:10.253 "data_offset": 0, 00:20:10.253 "data_size": 65536 00:20:10.253 }, 00:20:10.253 { 00:20:10.253 "name": "BaseBdev4", 00:20:10.253 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:10.253 "is_configured": true, 00:20:10.253 "data_offset": 0, 00:20:10.253 "data_size": 65536 00:20:10.253 } 00:20:10.253 ] 00:20:10.253 }' 00:20:10.253 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.253 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:10.253 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:10.253 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:10.253 18:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:11.187 18:18:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:11.187 18:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:11.187 18:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:11.187 18:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:11.187 18:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:11.187 18:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:11.187 18:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.187 18:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.187 18:18:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.187 18:18:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.187 18:18:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.446 18:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:11.446 "name": "raid_bdev1", 00:20:11.446 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:11.446 "strip_size_kb": 64, 00:20:11.446 "state": "online", 00:20:11.446 "raid_level": "raid5f", 00:20:11.446 "superblock": false, 00:20:11.446 "num_base_bdevs": 4, 00:20:11.446 "num_base_bdevs_discovered": 4, 00:20:11.446 "num_base_bdevs_operational": 4, 00:20:11.446 "process": { 00:20:11.446 "type": "rebuild", 00:20:11.446 "target": "spare", 00:20:11.446 "progress": { 00:20:11.446 "blocks": 44160, 00:20:11.446 "percent": 22 00:20:11.446 } 00:20:11.446 }, 00:20:11.446 "base_bdevs_list": [ 00:20:11.446 { 00:20:11.446 "name": "spare", 00:20:11.446 "uuid": "69de93cb-d4ce-514b-a284-616c56b871df", 
00:20:11.446 "is_configured": true, 00:20:11.446 "data_offset": 0, 00:20:11.446 "data_size": 65536 00:20:11.446 }, 00:20:11.446 { 00:20:11.446 "name": "BaseBdev2", 00:20:11.446 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:11.446 "is_configured": true, 00:20:11.446 "data_offset": 0, 00:20:11.446 "data_size": 65536 00:20:11.446 }, 00:20:11.446 { 00:20:11.446 "name": "BaseBdev3", 00:20:11.446 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:11.446 "is_configured": true, 00:20:11.446 "data_offset": 0, 00:20:11.446 "data_size": 65536 00:20:11.446 }, 00:20:11.446 { 00:20:11.446 "name": "BaseBdev4", 00:20:11.446 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:11.446 "is_configured": true, 00:20:11.446 "data_offset": 0, 00:20:11.446 "data_size": 65536 00:20:11.446 } 00:20:11.446 ] 00:20:11.446 }' 00:20:11.446 18:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:11.446 18:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:11.446 18:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:11.446 18:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:11.446 18:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:12.434 18:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:12.434 18:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.434 18:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.434 18:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:12.434 18:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:12.434 18:18:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.434 18:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.434 18:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.434 18:18:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.434 18:18:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.434 18:18:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.434 18:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.434 "name": "raid_bdev1", 00:20:12.434 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:12.434 "strip_size_kb": 64, 00:20:12.435 "state": "online", 00:20:12.435 "raid_level": "raid5f", 00:20:12.435 "superblock": false, 00:20:12.435 "num_base_bdevs": 4, 00:20:12.435 "num_base_bdevs_discovered": 4, 00:20:12.435 "num_base_bdevs_operational": 4, 00:20:12.435 "process": { 00:20:12.435 "type": "rebuild", 00:20:12.435 "target": "spare", 00:20:12.435 "progress": { 00:20:12.435 "blocks": 65280, 00:20:12.435 "percent": 33 00:20:12.435 } 00:20:12.435 }, 00:20:12.435 "base_bdevs_list": [ 00:20:12.435 { 00:20:12.435 "name": "spare", 00:20:12.435 "uuid": "69de93cb-d4ce-514b-a284-616c56b871df", 00:20:12.435 "is_configured": true, 00:20:12.435 "data_offset": 0, 00:20:12.435 "data_size": 65536 00:20:12.435 }, 00:20:12.435 { 00:20:12.435 "name": "BaseBdev2", 00:20:12.435 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:12.435 "is_configured": true, 00:20:12.435 "data_offset": 0, 00:20:12.435 "data_size": 65536 00:20:12.435 }, 00:20:12.435 { 00:20:12.435 "name": "BaseBdev3", 00:20:12.435 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:12.435 "is_configured": true, 00:20:12.435 "data_offset": 0, 00:20:12.435 "data_size": 65536 00:20:12.435 }, 00:20:12.435 { 00:20:12.435 "name": 
"BaseBdev4", 00:20:12.435 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:12.435 "is_configured": true, 00:20:12.435 "data_offset": 0, 00:20:12.435 "data_size": 65536 00:20:12.435 } 00:20:12.435 ] 00:20:12.435 }' 00:20:12.435 18:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.691 18:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.691 18:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:12.691 18:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.691 18:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:13.623 18:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:13.623 18:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.623 18:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.623 18:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:13.623 18:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:13.623 18:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.623 18:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.623 18:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.623 18:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.623 18:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.623 18:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.623 18:18:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.623 "name": "raid_bdev1", 00:20:13.623 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:13.623 "strip_size_kb": 64, 00:20:13.623 "state": "online", 00:20:13.623 "raid_level": "raid5f", 00:20:13.623 "superblock": false, 00:20:13.623 "num_base_bdevs": 4, 00:20:13.623 "num_base_bdevs_discovered": 4, 00:20:13.623 "num_base_bdevs_operational": 4, 00:20:13.623 "process": { 00:20:13.623 "type": "rebuild", 00:20:13.623 "target": "spare", 00:20:13.623 "progress": { 00:20:13.623 "blocks": 88320, 00:20:13.623 "percent": 44 00:20:13.623 } 00:20:13.623 }, 00:20:13.623 "base_bdevs_list": [ 00:20:13.623 { 00:20:13.623 "name": "spare", 00:20:13.623 "uuid": "69de93cb-d4ce-514b-a284-616c56b871df", 00:20:13.623 "is_configured": true, 00:20:13.623 "data_offset": 0, 00:20:13.623 "data_size": 65536 00:20:13.623 }, 00:20:13.623 { 00:20:13.623 "name": "BaseBdev2", 00:20:13.623 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:13.623 "is_configured": true, 00:20:13.623 "data_offset": 0, 00:20:13.623 "data_size": 65536 00:20:13.623 }, 00:20:13.623 { 00:20:13.623 "name": "BaseBdev3", 00:20:13.623 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:13.623 "is_configured": true, 00:20:13.623 "data_offset": 0, 00:20:13.623 "data_size": 65536 00:20:13.623 }, 00:20:13.623 { 00:20:13.623 "name": "BaseBdev4", 00:20:13.623 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:13.623 "is_configured": true, 00:20:13.623 "data_offset": 0, 00:20:13.623 "data_size": 65536 00:20:13.623 } 00:20:13.623 ] 00:20:13.623 }' 00:20:13.623 18:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.623 18:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.623 18:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.881 18:18:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.881 18:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.812 "name": "raid_bdev1", 00:20:14.812 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:14.812 "strip_size_kb": 64, 00:20:14.812 "state": "online", 00:20:14.812 "raid_level": "raid5f", 00:20:14.812 "superblock": false, 00:20:14.812 "num_base_bdevs": 4, 00:20:14.812 "num_base_bdevs_discovered": 4, 00:20:14.812 "num_base_bdevs_operational": 4, 00:20:14.812 "process": { 00:20:14.812 "type": "rebuild", 00:20:14.812 "target": "spare", 00:20:14.812 "progress": { 00:20:14.812 "blocks": 109440, 00:20:14.812 "percent": 55 00:20:14.812 } 
00:20:14.812 }, 00:20:14.812 "base_bdevs_list": [ 00:20:14.812 { 00:20:14.812 "name": "spare", 00:20:14.812 "uuid": "69de93cb-d4ce-514b-a284-616c56b871df", 00:20:14.812 "is_configured": true, 00:20:14.812 "data_offset": 0, 00:20:14.812 "data_size": 65536 00:20:14.812 }, 00:20:14.812 { 00:20:14.812 "name": "BaseBdev2", 00:20:14.812 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:14.812 "is_configured": true, 00:20:14.812 "data_offset": 0, 00:20:14.812 "data_size": 65536 00:20:14.812 }, 00:20:14.812 { 00:20:14.812 "name": "BaseBdev3", 00:20:14.812 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:14.812 "is_configured": true, 00:20:14.812 "data_offset": 0, 00:20:14.812 "data_size": 65536 00:20:14.812 }, 00:20:14.812 { 00:20:14.812 "name": "BaseBdev4", 00:20:14.812 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:14.812 "is_configured": true, 00:20:14.812 "data_offset": 0, 00:20:14.812 "data_size": 65536 00:20:14.812 } 00:20:14.812 ] 00:20:14.812 }' 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:14.812 18:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.188 
18:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.188 "name": "raid_bdev1", 00:20:16.188 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:16.188 "strip_size_kb": 64, 00:20:16.188 "state": "online", 00:20:16.188 "raid_level": "raid5f", 00:20:16.188 "superblock": false, 00:20:16.188 "num_base_bdevs": 4, 00:20:16.188 "num_base_bdevs_discovered": 4, 00:20:16.188 "num_base_bdevs_operational": 4, 00:20:16.188 "process": { 00:20:16.188 "type": "rebuild", 00:20:16.188 "target": "spare", 00:20:16.188 "progress": { 00:20:16.188 "blocks": 132480, 00:20:16.188 "percent": 67 00:20:16.188 } 00:20:16.188 }, 00:20:16.188 "base_bdevs_list": [ 00:20:16.188 { 00:20:16.188 "name": "spare", 00:20:16.188 "uuid": "69de93cb-d4ce-514b-a284-616c56b871df", 00:20:16.188 "is_configured": true, 00:20:16.188 "data_offset": 0, 00:20:16.188 "data_size": 65536 00:20:16.188 }, 00:20:16.188 { 00:20:16.188 "name": "BaseBdev2", 00:20:16.188 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:16.188 "is_configured": true, 00:20:16.188 "data_offset": 0, 00:20:16.188 "data_size": 65536 00:20:16.188 }, 00:20:16.188 { 00:20:16.188 "name": "BaseBdev3", 00:20:16.188 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 
00:20:16.188 "is_configured": true, 00:20:16.188 "data_offset": 0, 00:20:16.188 "data_size": 65536 00:20:16.188 }, 00:20:16.188 { 00:20:16.188 "name": "BaseBdev4", 00:20:16.188 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:16.188 "is_configured": true, 00:20:16.188 "data_offset": 0, 00:20:16.188 "data_size": 65536 00:20:16.188 } 00:20:16.188 ] 00:20:16.188 }' 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.188 18:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:17.127 18:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:17.127 18:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.127 18:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.127 18:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:17.127 18:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:17.127 18:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.127 18:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.127 18:18:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.127 18:18:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.127 18:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:20:17.127 18:18:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.127 18:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.127 "name": "raid_bdev1", 00:20:17.127 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:17.127 "strip_size_kb": 64, 00:20:17.127 "state": "online", 00:20:17.127 "raid_level": "raid5f", 00:20:17.127 "superblock": false, 00:20:17.127 "num_base_bdevs": 4, 00:20:17.127 "num_base_bdevs_discovered": 4, 00:20:17.127 "num_base_bdevs_operational": 4, 00:20:17.127 "process": { 00:20:17.127 "type": "rebuild", 00:20:17.127 "target": "spare", 00:20:17.127 "progress": { 00:20:17.127 "blocks": 153600, 00:20:17.127 "percent": 78 00:20:17.127 } 00:20:17.128 }, 00:20:17.128 "base_bdevs_list": [ 00:20:17.128 { 00:20:17.128 "name": "spare", 00:20:17.128 "uuid": "69de93cb-d4ce-514b-a284-616c56b871df", 00:20:17.128 "is_configured": true, 00:20:17.128 "data_offset": 0, 00:20:17.128 "data_size": 65536 00:20:17.128 }, 00:20:17.128 { 00:20:17.128 "name": "BaseBdev2", 00:20:17.128 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:17.128 "is_configured": true, 00:20:17.128 "data_offset": 0, 00:20:17.128 "data_size": 65536 00:20:17.128 }, 00:20:17.128 { 00:20:17.128 "name": "BaseBdev3", 00:20:17.128 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:17.128 "is_configured": true, 00:20:17.128 "data_offset": 0, 00:20:17.128 "data_size": 65536 00:20:17.128 }, 00:20:17.128 { 00:20:17.128 "name": "BaseBdev4", 00:20:17.128 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:17.128 "is_configured": true, 00:20:17.128 "data_offset": 0, 00:20:17.128 "data_size": 65536 00:20:17.128 } 00:20:17.128 ] 00:20:17.128 }' 00:20:17.128 18:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.128 18:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:17.128 18:18:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.387 18:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.387 18:18:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:18.324 18:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:18.324 18:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.324 18:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.324 18:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.324 18:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.324 18:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.324 18:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.324 18:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.325 18:18:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.325 18:18:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.325 18:18:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.325 18:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.325 "name": "raid_bdev1", 00:20:18.325 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:18.325 "strip_size_kb": 64, 00:20:18.325 "state": "online", 00:20:18.325 "raid_level": "raid5f", 00:20:18.325 "superblock": false, 00:20:18.325 "num_base_bdevs": 4, 00:20:18.325 "num_base_bdevs_discovered": 4, 00:20:18.325 "num_base_bdevs_operational": 4, 00:20:18.325 "process": { 00:20:18.325 
"type": "rebuild", 00:20:18.325 "target": "spare", 00:20:18.325 "progress": { 00:20:18.325 "blocks": 176640, 00:20:18.325 "percent": 89 00:20:18.325 } 00:20:18.325 }, 00:20:18.325 "base_bdevs_list": [ 00:20:18.325 { 00:20:18.325 "name": "spare", 00:20:18.325 "uuid": "69de93cb-d4ce-514b-a284-616c56b871df", 00:20:18.325 "is_configured": true, 00:20:18.325 "data_offset": 0, 00:20:18.325 "data_size": 65536 00:20:18.325 }, 00:20:18.325 { 00:20:18.325 "name": "BaseBdev2", 00:20:18.325 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:18.325 "is_configured": true, 00:20:18.325 "data_offset": 0, 00:20:18.325 "data_size": 65536 00:20:18.325 }, 00:20:18.325 { 00:20:18.325 "name": "BaseBdev3", 00:20:18.325 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:18.325 "is_configured": true, 00:20:18.325 "data_offset": 0, 00:20:18.325 "data_size": 65536 00:20:18.325 }, 00:20:18.325 { 00:20:18.325 "name": "BaseBdev4", 00:20:18.325 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:18.325 "is_configured": true, 00:20:18.325 "data_offset": 0, 00:20:18.325 "data_size": 65536 00:20:18.325 } 00:20:18.325 ] 00:20:18.325 }' 00:20:18.325 18:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.325 18:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.325 18:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.325 18:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.325 18:18:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:19.262 [2024-12-06 18:18:44.755741] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:19.262 [2024-12-06 18:18:44.756046] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:19.262 [2024-12-06 18:18:44.756254] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.596 "name": "raid_bdev1", 00:20:19.596 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:19.596 "strip_size_kb": 64, 00:20:19.596 "state": "online", 00:20:19.596 "raid_level": "raid5f", 00:20:19.596 "superblock": false, 00:20:19.596 "num_base_bdevs": 4, 00:20:19.596 "num_base_bdevs_discovered": 4, 00:20:19.596 "num_base_bdevs_operational": 4, 00:20:19.596 "base_bdevs_list": [ 00:20:19.596 { 00:20:19.596 "name": "spare", 00:20:19.596 "uuid": "69de93cb-d4ce-514b-a284-616c56b871df", 00:20:19.596 "is_configured": true, 00:20:19.596 "data_offset": 0, 00:20:19.596 "data_size": 65536 00:20:19.596 }, 00:20:19.596 { 
00:20:19.596 "name": "BaseBdev2", 00:20:19.596 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:19.596 "is_configured": true, 00:20:19.596 "data_offset": 0, 00:20:19.596 "data_size": 65536 00:20:19.596 }, 00:20:19.596 { 00:20:19.596 "name": "BaseBdev3", 00:20:19.596 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:19.596 "is_configured": true, 00:20:19.596 "data_offset": 0, 00:20:19.596 "data_size": 65536 00:20:19.596 }, 00:20:19.596 { 00:20:19.596 "name": "BaseBdev4", 00:20:19.596 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:19.596 "is_configured": true, 00:20:19.596 "data_offset": 0, 00:20:19.596 "data_size": 65536 00:20:19.596 } 00:20:19.596 ] 00:20:19.596 }' 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:19.596 18:18:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:19.596 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.596 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.596 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:19.596 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:19.596 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.596 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.596 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:19.596 18:18:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.596 18:18:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.596 18:18:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.596 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.596 "name": "raid_bdev1", 00:20:19.596 "uuid": "98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:19.596 "strip_size_kb": 64, 00:20:19.596 "state": "online", 00:20:19.596 "raid_level": "raid5f", 00:20:19.596 "superblock": false, 00:20:19.596 "num_base_bdevs": 4, 00:20:19.596 "num_base_bdevs_discovered": 4, 00:20:19.596 "num_base_bdevs_operational": 4, 00:20:19.596 "base_bdevs_list": [ 00:20:19.596 { 00:20:19.596 "name": "spare", 00:20:19.596 "uuid": "69de93cb-d4ce-514b-a284-616c56b871df", 00:20:19.596 "is_configured": true, 00:20:19.596 "data_offset": 0, 00:20:19.596 "data_size": 65536 00:20:19.596 }, 00:20:19.596 { 00:20:19.596 "name": "BaseBdev2", 00:20:19.596 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:19.596 "is_configured": true, 00:20:19.596 "data_offset": 0, 00:20:19.596 "data_size": 65536 00:20:19.596 }, 00:20:19.596 { 00:20:19.596 "name": "BaseBdev3", 00:20:19.596 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:19.596 "is_configured": true, 00:20:19.596 "data_offset": 0, 00:20:19.596 "data_size": 65536 00:20:19.596 }, 00:20:19.596 { 00:20:19.596 "name": "BaseBdev4", 00:20:19.596 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:19.596 "is_configured": true, 00:20:19.596 "data_offset": 0, 00:20:19.596 "data_size": 65536 00:20:19.596 } 00:20:19.596 ] 00:20:19.596 }' 00:20:19.596 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:19.854 18:18:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.854 "name": "raid_bdev1", 00:20:19.854 "uuid": 
"98d1a9d6-a06c-4abd-b963-7c9716a886cb", 00:20:19.854 "strip_size_kb": 64, 00:20:19.854 "state": "online", 00:20:19.854 "raid_level": "raid5f", 00:20:19.854 "superblock": false, 00:20:19.854 "num_base_bdevs": 4, 00:20:19.854 "num_base_bdevs_discovered": 4, 00:20:19.854 "num_base_bdevs_operational": 4, 00:20:19.854 "base_bdevs_list": [ 00:20:19.854 { 00:20:19.854 "name": "spare", 00:20:19.854 "uuid": "69de93cb-d4ce-514b-a284-616c56b871df", 00:20:19.854 "is_configured": true, 00:20:19.854 "data_offset": 0, 00:20:19.854 "data_size": 65536 00:20:19.854 }, 00:20:19.854 { 00:20:19.854 "name": "BaseBdev2", 00:20:19.854 "uuid": "32e3a011-4894-5a49-bef5-9594d9f19bbc", 00:20:19.854 "is_configured": true, 00:20:19.854 "data_offset": 0, 00:20:19.854 "data_size": 65536 00:20:19.854 }, 00:20:19.854 { 00:20:19.854 "name": "BaseBdev3", 00:20:19.854 "uuid": "02a52894-f223-55cc-83e9-5f06a416c17f", 00:20:19.854 "is_configured": true, 00:20:19.854 "data_offset": 0, 00:20:19.854 "data_size": 65536 00:20:19.854 }, 00:20:19.854 { 00:20:19.854 "name": "BaseBdev4", 00:20:19.854 "uuid": "9adcca4b-5e74-5b1e-87c7-509763095bc7", 00:20:19.854 "is_configured": true, 00:20:19.854 "data_offset": 0, 00:20:19.854 "data_size": 65536 00:20:19.854 } 00:20:19.854 ] 00:20:19.854 }' 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.854 18:18:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.421 [2024-12-06 18:18:45.723888] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:20.421 [2024-12-06 18:18:45.723942] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:20:20.421 [2024-12-06 18:18:45.724051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:20.421 [2024-12-06 18:18:45.724182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:20.421 [2024-12-06 18:18:45.724203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:20.421 18:18:45 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:20.421 18:18:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:20.680 /dev/nbd0 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:20.680 1+0 records in 00:20:20.680 1+0 records out 00:20:20.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384261 s, 10.7 MB/s 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:20.680 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:20.938 /dev/nbd1 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:21.197 1+0 records in 00:20:21.197 1+0 records out 00:20:21.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386316 s, 10.6 MB/s 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:21.197 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:20:21.764 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:21.764 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:21.764 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:21.764 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:21.764 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:21.764 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:21.764 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:21.764 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:21.764 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:21.764 18:18:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:22.022 18:18:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85061 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85061 ']' 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85061 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85061 00:20:22.022 killing process with pid 85061 00:20:22.022 Received shutdown signal, test time was about 60.000000 seconds 00:20:22.022 00:20:22.022 Latency(us) 00:20:22.022 [2024-12-06T18:18:47.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.022 [2024-12-06T18:18:47.542Z] =================================================================================================================== 00:20:22.022 [2024-12-06T18:18:47.542Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85061' 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85061 00:20:22.022 18:18:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85061 00:20:22.022 [2024-12-06 18:18:47.360646] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:22.589 [2024-12-06 18:18:47.800886] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # 
return 0 00:20:23.524 ************************************ 00:20:23.524 END TEST raid5f_rebuild_test 00:20:23.524 ************************************ 00:20:23.524 00:20:23.524 real 0m20.223s 00:20:23.524 user 0m25.145s 00:20:23.524 sys 0m2.306s 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.524 18:18:48 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:20:23.524 18:18:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:23.524 18:18:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:23.524 18:18:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:23.524 ************************************ 00:20:23.524 START TEST raid5f_rebuild_test_sb 00:20:23.524 ************************************ 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:23.524 18:18:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:23.524 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 
00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85570 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85570 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85570 ']' 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.525 18:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.525 [2024-12-06 18:18:49.012350] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:20:23.525 [2024-12-06 18:18:49.012516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85570 ] 00:20:23.525 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:23.525 Zero copy mechanism will not be used. 00:20:23.784 [2024-12-06 18:18:49.185356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.044 [2024-12-06 18:18:49.314644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.044 [2024-12-06 18:18:49.518535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.044 [2024-12-06 18:18:49.518633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.612 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.612 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:24.612 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:24.612 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:24.612 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.612 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.871 BaseBdev1_malloc 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:24.871 [2024-12-06 18:18:50.157521] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:24.871 [2024-12-06 18:18:50.157608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.871 [2024-12-06 18:18:50.157643] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:24.871 [2024-12-06 18:18:50.157666] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.871 [2024-12-06 18:18:50.160533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.871 [2024-12-06 18:18:50.160606] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:24.871 BaseBdev1 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.871 BaseBdev2_malloc 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.871 [2024-12-06 18:18:50.205538] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:24.871 
[2024-12-06 18:18:50.205623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.871 [2024-12-06 18:18:50.205661] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:24.871 [2024-12-06 18:18:50.205684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.871 [2024-12-06 18:18:50.208520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.871 [2024-12-06 18:18:50.208578] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:24.871 BaseBdev2 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.871 BaseBdev3_malloc 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.871 [2024-12-06 18:18:50.263980] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:24.871 [2024-12-06 18:18:50.264060] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.871 [2024-12-06 18:18:50.264095] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:24.871 [2024-12-06 18:18:50.264118] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.871 [2024-12-06 18:18:50.266934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.871 [2024-12-06 18:18:50.266991] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:24.871 BaseBdev3 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.871 BaseBdev4_malloc 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.871 [2024-12-06 18:18:50.312453] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:24.871 [2024-12-06 18:18:50.312539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.871 [2024-12-06 18:18:50.312573] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:24.871 [2024-12-06 18:18:50.312596] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:20:24.871 [2024-12-06 18:18:50.315354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.871 [2024-12-06 18:18:50.315414] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:24.871 BaseBdev4 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.871 spare_malloc 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.871 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.872 spare_delay 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.872 [2024-12-06 18:18:50.368465] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:24.872 [2024-12-06 18:18:50.368541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.872 [2024-12-06 18:18:50.368573] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:24.872 [2024-12-06 18:18:50.368595] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.872 [2024-12-06 18:18:50.371398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.872 [2024-12-06 18:18:50.371455] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:24.872 spare 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.872 [2024-12-06 18:18:50.376531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.872 [2024-12-06 18:18:50.379007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:24.872 [2024-12-06 18:18:50.379112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:24.872 [2024-12-06 18:18:50.379209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:24.872 [2024-12-06 18:18:50.379489] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:24.872 [2024-12-06 18:18:50.379528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:24.872 [2024-12-06 18:18:50.379881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:24.872 [2024-12-06 18:18:50.386595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:24.872 
[2024-12-06 18:18:50.386631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:24.872 [2024-12-06 18:18:50.386913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.872 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.131 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.131 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.131 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.131 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.131 18:18:50 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.131 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.131 "name": "raid_bdev1", 00:20:25.131 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:25.131 "strip_size_kb": 64, 00:20:25.131 "state": "online", 00:20:25.131 "raid_level": "raid5f", 00:20:25.131 "superblock": true, 00:20:25.131 "num_base_bdevs": 4, 00:20:25.131 "num_base_bdevs_discovered": 4, 00:20:25.131 "num_base_bdevs_operational": 4, 00:20:25.131 "base_bdevs_list": [ 00:20:25.131 { 00:20:25.131 "name": "BaseBdev1", 00:20:25.131 "uuid": "4dd35556-78e1-50ed-9364-6ceebc919ecf", 00:20:25.131 "is_configured": true, 00:20:25.131 "data_offset": 2048, 00:20:25.131 "data_size": 63488 00:20:25.131 }, 00:20:25.131 { 00:20:25.131 "name": "BaseBdev2", 00:20:25.131 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:25.131 "is_configured": true, 00:20:25.131 "data_offset": 2048, 00:20:25.131 "data_size": 63488 00:20:25.131 }, 00:20:25.131 { 00:20:25.131 "name": "BaseBdev3", 00:20:25.131 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:25.131 "is_configured": true, 00:20:25.131 "data_offset": 2048, 00:20:25.131 "data_size": 63488 00:20:25.131 }, 00:20:25.131 { 00:20:25.131 "name": "BaseBdev4", 00:20:25.131 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:25.131 "is_configured": true, 00:20:25.131 "data_offset": 2048, 00:20:25.131 "data_size": 63488 00:20:25.131 } 00:20:25.131 ] 00:20:25.131 }' 00:20:25.131 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.131 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.389 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:25.389 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:25.389 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.389 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.389 [2024-12-06 18:18:50.882662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.389 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:25.649 18:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:25.908 [2024-12-06 18:18:51.258547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:25.908 /dev/nbd0 00:20:25.908 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:25.908 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:25.908 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:25.908 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:25.908 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:20:25.909 1+0 records in 00:20:25.909 1+0 records out 00:20:25.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367067 s, 11.2 MB/s 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:20:25.909 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:20:26.477 496+0 records in 00:20:26.477 496+0 records out 00:20:26.477 97517568 bytes (98 MB, 93 MiB) copied, 0.616687 s, 158 MB/s 00:20:26.477 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:26.477 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:26.477 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:26.477 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 
-- # local nbd_list 00:20:26.477 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:26.477 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:26.477 18:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:27.046 [2024-12-06 18:18:52.261674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.046 [2024-12-06 18:18:52.277310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.046 "name": "raid_bdev1", 00:20:27.046 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:27.046 "strip_size_kb": 64, 00:20:27.046 "state": "online", 00:20:27.046 "raid_level": "raid5f", 00:20:27.046 "superblock": true, 00:20:27.046 "num_base_bdevs": 4, 00:20:27.046 "num_base_bdevs_discovered": 3, 00:20:27.046 
"num_base_bdevs_operational": 3, 00:20:27.046 "base_bdevs_list": [ 00:20:27.046 { 00:20:27.046 "name": null, 00:20:27.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.046 "is_configured": false, 00:20:27.046 "data_offset": 0, 00:20:27.046 "data_size": 63488 00:20:27.046 }, 00:20:27.046 { 00:20:27.046 "name": "BaseBdev2", 00:20:27.046 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:27.046 "is_configured": true, 00:20:27.046 "data_offset": 2048, 00:20:27.046 "data_size": 63488 00:20:27.046 }, 00:20:27.046 { 00:20:27.046 "name": "BaseBdev3", 00:20:27.046 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:27.046 "is_configured": true, 00:20:27.046 "data_offset": 2048, 00:20:27.046 "data_size": 63488 00:20:27.046 }, 00:20:27.046 { 00:20:27.046 "name": "BaseBdev4", 00:20:27.046 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:27.046 "is_configured": true, 00:20:27.046 "data_offset": 2048, 00:20:27.046 "data_size": 63488 00:20:27.046 } 00:20:27.046 ] 00:20:27.046 }' 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.046 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.305 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:27.305 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.306 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.306 [2024-12-06 18:18:52.797491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:27.306 [2024-12-06 18:18:52.811667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:20:27.306 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.306 18:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:27.306 
[2024-12-06 18:18:52.820745] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.681 "name": "raid_bdev1", 00:20:28.681 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:28.681 "strip_size_kb": 64, 00:20:28.681 "state": "online", 00:20:28.681 "raid_level": "raid5f", 00:20:28.681 "superblock": true, 00:20:28.681 "num_base_bdevs": 4, 00:20:28.681 "num_base_bdevs_discovered": 4, 00:20:28.681 "num_base_bdevs_operational": 4, 00:20:28.681 "process": { 00:20:28.681 "type": "rebuild", 00:20:28.681 "target": "spare", 00:20:28.681 "progress": { 00:20:28.681 "blocks": 17280, 00:20:28.681 "percent": 9 00:20:28.681 } 00:20:28.681 }, 00:20:28.681 "base_bdevs_list": [ 00:20:28.681 { 00:20:28.681 "name": 
"spare", 00:20:28.681 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:28.681 "is_configured": true, 00:20:28.681 "data_offset": 2048, 00:20:28.681 "data_size": 63488 00:20:28.681 }, 00:20:28.681 { 00:20:28.681 "name": "BaseBdev2", 00:20:28.681 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:28.681 "is_configured": true, 00:20:28.681 "data_offset": 2048, 00:20:28.681 "data_size": 63488 00:20:28.681 }, 00:20:28.681 { 00:20:28.681 "name": "BaseBdev3", 00:20:28.681 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:28.681 "is_configured": true, 00:20:28.681 "data_offset": 2048, 00:20:28.681 "data_size": 63488 00:20:28.681 }, 00:20:28.681 { 00:20:28.681 "name": "BaseBdev4", 00:20:28.681 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:28.681 "is_configured": true, 00:20:28.681 "data_offset": 2048, 00:20:28.681 "data_size": 63488 00:20:28.681 } 00:20:28.681 ] 00:20:28.681 }' 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.681 18:18:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.681 [2024-12-06 18:18:53.978520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.681 [2024-12-06 18:18:54.034746] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:28.681 [2024-12-06 
18:18:54.034860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.681 [2024-12-06 18:18:54.034893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.681 [2024-12-06 18:18:54.034921] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:28.681 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.681 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:28.681 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.681 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.682 "name": "raid_bdev1", 00:20:28.682 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:28.682 "strip_size_kb": 64, 00:20:28.682 "state": "online", 00:20:28.682 "raid_level": "raid5f", 00:20:28.682 "superblock": true, 00:20:28.682 "num_base_bdevs": 4, 00:20:28.682 "num_base_bdevs_discovered": 3, 00:20:28.682 "num_base_bdevs_operational": 3, 00:20:28.682 "base_bdevs_list": [ 00:20:28.682 { 00:20:28.682 "name": null, 00:20:28.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.682 "is_configured": false, 00:20:28.682 "data_offset": 0, 00:20:28.682 "data_size": 63488 00:20:28.682 }, 00:20:28.682 { 00:20:28.682 "name": "BaseBdev2", 00:20:28.682 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:28.682 "is_configured": true, 00:20:28.682 "data_offset": 2048, 00:20:28.682 "data_size": 63488 00:20:28.682 }, 00:20:28.682 { 00:20:28.682 "name": "BaseBdev3", 00:20:28.682 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:28.682 "is_configured": true, 00:20:28.682 "data_offset": 2048, 00:20:28.682 "data_size": 63488 00:20:28.682 }, 00:20:28.682 { 00:20:28.682 "name": "BaseBdev4", 00:20:28.682 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:28.682 "is_configured": true, 00:20:28.682 "data_offset": 2048, 00:20:28.682 "data_size": 63488 00:20:28.682 } 00:20:28.682 ] 00:20:28.682 }' 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.682 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.248 "name": "raid_bdev1", 00:20:29.248 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:29.248 "strip_size_kb": 64, 00:20:29.248 "state": "online", 00:20:29.248 "raid_level": "raid5f", 00:20:29.248 "superblock": true, 00:20:29.248 "num_base_bdevs": 4, 00:20:29.248 "num_base_bdevs_discovered": 3, 00:20:29.248 "num_base_bdevs_operational": 3, 00:20:29.248 "base_bdevs_list": [ 00:20:29.248 { 00:20:29.248 "name": null, 00:20:29.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.248 "is_configured": false, 00:20:29.248 "data_offset": 0, 00:20:29.248 "data_size": 63488 00:20:29.248 }, 00:20:29.248 { 00:20:29.248 "name": "BaseBdev2", 00:20:29.248 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:29.248 "is_configured": true, 00:20:29.248 "data_offset": 2048, 00:20:29.248 "data_size": 63488 00:20:29.248 }, 00:20:29.248 { 00:20:29.248 "name": "BaseBdev3", 00:20:29.248 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:29.248 "is_configured": true, 
00:20:29.248 "data_offset": 2048, 00:20:29.248 "data_size": 63488 00:20:29.248 }, 00:20:29.248 { 00:20:29.248 "name": "BaseBdev4", 00:20:29.248 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:29.248 "is_configured": true, 00:20:29.248 "data_offset": 2048, 00:20:29.248 "data_size": 63488 00:20:29.248 } 00:20:29.248 ] 00:20:29.248 }' 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.248 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.248 [2024-12-06 18:18:54.763892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:29.611 [2024-12-06 18:18:54.778520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:20:29.611 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.611 18:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:29.611 [2024-12-06 18:18:54.787668] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.547 18:18:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:30.547 "name": "raid_bdev1", 00:20:30.547 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:30.547 "strip_size_kb": 64, 00:20:30.547 "state": "online", 00:20:30.547 "raid_level": "raid5f", 00:20:30.547 "superblock": true, 00:20:30.547 "num_base_bdevs": 4, 00:20:30.547 "num_base_bdevs_discovered": 4, 00:20:30.547 "num_base_bdevs_operational": 4, 00:20:30.547 "process": { 00:20:30.547 "type": "rebuild", 00:20:30.547 "target": "spare", 00:20:30.547 "progress": { 00:20:30.547 "blocks": 17280, 00:20:30.547 "percent": 9 00:20:30.547 } 00:20:30.547 }, 00:20:30.547 "base_bdevs_list": [ 00:20:30.547 { 00:20:30.547 "name": "spare", 00:20:30.547 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:30.547 "is_configured": true, 00:20:30.547 "data_offset": 2048, 00:20:30.547 "data_size": 63488 00:20:30.547 }, 00:20:30.547 { 00:20:30.547 "name": "BaseBdev2", 00:20:30.547 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:30.547 "is_configured": true, 00:20:30.547 "data_offset": 2048, 00:20:30.547 "data_size": 63488 
00:20:30.547 }, 00:20:30.547 { 00:20:30.547 "name": "BaseBdev3", 00:20:30.547 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:30.547 "is_configured": true, 00:20:30.547 "data_offset": 2048, 00:20:30.547 "data_size": 63488 00:20:30.547 }, 00:20:30.547 { 00:20:30.547 "name": "BaseBdev4", 00:20:30.547 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:30.547 "is_configured": true, 00:20:30.547 "data_offset": 2048, 00:20:30.547 "data_size": 63488 00:20:30.547 } 00:20:30.547 ] 00:20:30.547 }' 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:30.547 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=691 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.547 18:18:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.547 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:30.547 "name": "raid_bdev1", 00:20:30.547 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:30.547 "strip_size_kb": 64, 00:20:30.547 "state": "online", 00:20:30.547 "raid_level": "raid5f", 00:20:30.547 "superblock": true, 00:20:30.547 "num_base_bdevs": 4, 00:20:30.547 "num_base_bdevs_discovered": 4, 00:20:30.547 "num_base_bdevs_operational": 4, 00:20:30.547 "process": { 00:20:30.547 "type": "rebuild", 00:20:30.547 "target": "spare", 00:20:30.547 "progress": { 00:20:30.547 "blocks": 21120, 00:20:30.547 "percent": 11 00:20:30.547 } 00:20:30.547 }, 00:20:30.547 "base_bdevs_list": [ 00:20:30.547 { 00:20:30.547 "name": "spare", 00:20:30.547 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:30.547 "is_configured": true, 00:20:30.547 "data_offset": 2048, 00:20:30.547 "data_size": 63488 00:20:30.547 }, 00:20:30.547 { 00:20:30.547 "name": "BaseBdev2", 00:20:30.547 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:30.547 "is_configured": true, 00:20:30.547 "data_offset": 2048, 00:20:30.547 "data_size": 63488 
00:20:30.547 }, 00:20:30.547 { 00:20:30.547 "name": "BaseBdev3", 00:20:30.547 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:30.548 "is_configured": true, 00:20:30.548 "data_offset": 2048, 00:20:30.548 "data_size": 63488 00:20:30.548 }, 00:20:30.548 { 00:20:30.548 "name": "BaseBdev4", 00:20:30.548 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:30.548 "is_configured": true, 00:20:30.548 "data_offset": 2048, 00:20:30.548 "data_size": 63488 00:20:30.548 } 00:20:30.548 ] 00:20:30.548 }' 00:20:30.548 18:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.548 18:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.548 18:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.806 18:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.806 18:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:31.739 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:31.739 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.739 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.739 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:31.739 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:31.739 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.739 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.739 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:31.739 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.739 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.739 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.739 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.739 "name": "raid_bdev1", 00:20:31.739 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:31.739 "strip_size_kb": 64, 00:20:31.739 "state": "online", 00:20:31.739 "raid_level": "raid5f", 00:20:31.739 "superblock": true, 00:20:31.739 "num_base_bdevs": 4, 00:20:31.739 "num_base_bdevs_discovered": 4, 00:20:31.739 "num_base_bdevs_operational": 4, 00:20:31.739 "process": { 00:20:31.739 "type": "rebuild", 00:20:31.739 "target": "spare", 00:20:31.739 "progress": { 00:20:31.739 "blocks": 44160, 00:20:31.739 "percent": 23 00:20:31.739 } 00:20:31.739 }, 00:20:31.739 "base_bdevs_list": [ 00:20:31.739 { 00:20:31.739 "name": "spare", 00:20:31.739 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:31.739 "is_configured": true, 00:20:31.739 "data_offset": 2048, 00:20:31.739 "data_size": 63488 00:20:31.739 }, 00:20:31.739 { 00:20:31.739 "name": "BaseBdev2", 00:20:31.739 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:31.739 "is_configured": true, 00:20:31.739 "data_offset": 2048, 00:20:31.739 "data_size": 63488 00:20:31.739 }, 00:20:31.739 { 00:20:31.739 "name": "BaseBdev3", 00:20:31.739 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:31.739 "is_configured": true, 00:20:31.739 "data_offset": 2048, 00:20:31.739 "data_size": 63488 00:20:31.739 }, 00:20:31.739 { 00:20:31.739 "name": "BaseBdev4", 00:20:31.739 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:31.739 "is_configured": true, 00:20:31.739 "data_offset": 2048, 00:20:31.739 "data_size": 63488 00:20:31.739 } 00:20:31.739 ] 00:20:31.739 }' 00:20:31.739 18:18:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.739 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.739 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.996 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.996 18:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.929 "name": "raid_bdev1", 00:20:32.929 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:32.929 
"strip_size_kb": 64, 00:20:32.929 "state": "online", 00:20:32.929 "raid_level": "raid5f", 00:20:32.929 "superblock": true, 00:20:32.929 "num_base_bdevs": 4, 00:20:32.929 "num_base_bdevs_discovered": 4, 00:20:32.929 "num_base_bdevs_operational": 4, 00:20:32.929 "process": { 00:20:32.929 "type": "rebuild", 00:20:32.929 "target": "spare", 00:20:32.929 "progress": { 00:20:32.929 "blocks": 65280, 00:20:32.929 "percent": 34 00:20:32.929 } 00:20:32.929 }, 00:20:32.929 "base_bdevs_list": [ 00:20:32.929 { 00:20:32.929 "name": "spare", 00:20:32.929 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:32.929 "is_configured": true, 00:20:32.929 "data_offset": 2048, 00:20:32.929 "data_size": 63488 00:20:32.929 }, 00:20:32.929 { 00:20:32.929 "name": "BaseBdev2", 00:20:32.929 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:32.929 "is_configured": true, 00:20:32.929 "data_offset": 2048, 00:20:32.929 "data_size": 63488 00:20:32.929 }, 00:20:32.929 { 00:20:32.929 "name": "BaseBdev3", 00:20:32.929 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:32.929 "is_configured": true, 00:20:32.929 "data_offset": 2048, 00:20:32.929 "data_size": 63488 00:20:32.929 }, 00:20:32.929 { 00:20:32.929 "name": "BaseBdev4", 00:20:32.929 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:32.929 "is_configured": true, 00:20:32.929 "data_offset": 2048, 00:20:32.929 "data_size": 63488 00:20:32.929 } 00:20:32.929 ] 00:20:32.929 }' 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.929 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:33.187 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:33.187 18:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:34.133 
18:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:34.133 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.133 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:34.133 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:34.133 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:34.133 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:34.133 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.133 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.133 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.133 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.133 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.133 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.133 "name": "raid_bdev1", 00:20:34.133 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:34.133 "strip_size_kb": 64, 00:20:34.133 "state": "online", 00:20:34.133 "raid_level": "raid5f", 00:20:34.134 "superblock": true, 00:20:34.134 "num_base_bdevs": 4, 00:20:34.134 "num_base_bdevs_discovered": 4, 00:20:34.134 "num_base_bdevs_operational": 4, 00:20:34.134 "process": { 00:20:34.134 "type": "rebuild", 00:20:34.134 "target": "spare", 00:20:34.134 "progress": { 00:20:34.134 "blocks": 88320, 00:20:34.134 "percent": 46 00:20:34.134 } 00:20:34.134 }, 00:20:34.134 "base_bdevs_list": [ 00:20:34.134 { 00:20:34.134 "name": "spare", 00:20:34.134 "uuid": 
"f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:34.134 "is_configured": true, 00:20:34.134 "data_offset": 2048, 00:20:34.134 "data_size": 63488 00:20:34.134 }, 00:20:34.134 { 00:20:34.134 "name": "BaseBdev2", 00:20:34.134 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:34.134 "is_configured": true, 00:20:34.134 "data_offset": 2048, 00:20:34.134 "data_size": 63488 00:20:34.134 }, 00:20:34.134 { 00:20:34.134 "name": "BaseBdev3", 00:20:34.134 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:34.134 "is_configured": true, 00:20:34.134 "data_offset": 2048, 00:20:34.134 "data_size": 63488 00:20:34.134 }, 00:20:34.134 { 00:20:34.134 "name": "BaseBdev4", 00:20:34.134 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:34.134 "is_configured": true, 00:20:34.134 "data_offset": 2048, 00:20:34.134 "data_size": 63488 00:20:34.134 } 00:20:34.134 ] 00:20:34.134 }' 00:20:34.134 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.134 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:34.134 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.134 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:34.134 18:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:35.539 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:35.539 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:35.539 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:35.539 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:35.539 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:20:35.539 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:35.539 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.539 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.539 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.539 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.539 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.539 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.539 "name": "raid_bdev1", 00:20:35.539 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:35.539 "strip_size_kb": 64, 00:20:35.539 "state": "online", 00:20:35.539 "raid_level": "raid5f", 00:20:35.539 "superblock": true, 00:20:35.539 "num_base_bdevs": 4, 00:20:35.539 "num_base_bdevs_discovered": 4, 00:20:35.539 "num_base_bdevs_operational": 4, 00:20:35.539 "process": { 00:20:35.539 "type": "rebuild", 00:20:35.539 "target": "spare", 00:20:35.539 "progress": { 00:20:35.539 "blocks": 109440, 00:20:35.540 "percent": 57 00:20:35.540 } 00:20:35.540 }, 00:20:35.540 "base_bdevs_list": [ 00:20:35.540 { 00:20:35.540 "name": "spare", 00:20:35.540 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:35.540 "is_configured": true, 00:20:35.540 "data_offset": 2048, 00:20:35.540 "data_size": 63488 00:20:35.540 }, 00:20:35.540 { 00:20:35.540 "name": "BaseBdev2", 00:20:35.540 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:35.540 "is_configured": true, 00:20:35.540 "data_offset": 2048, 00:20:35.540 "data_size": 63488 00:20:35.540 }, 00:20:35.540 { 00:20:35.540 "name": "BaseBdev3", 00:20:35.540 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:35.540 "is_configured": true, 00:20:35.540 
"data_offset": 2048, 00:20:35.540 "data_size": 63488 00:20:35.540 }, 00:20:35.540 { 00:20:35.540 "name": "BaseBdev4", 00:20:35.540 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:35.540 "is_configured": true, 00:20:35.540 "data_offset": 2048, 00:20:35.540 "data_size": 63488 00:20:35.540 } 00:20:35.540 ] 00:20:35.540 }' 00:20:35.540 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.540 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:35.540 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.540 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.540 18:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.477 "name": "raid_bdev1", 00:20:36.477 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:36.477 "strip_size_kb": 64, 00:20:36.477 "state": "online", 00:20:36.477 "raid_level": "raid5f", 00:20:36.477 "superblock": true, 00:20:36.477 "num_base_bdevs": 4, 00:20:36.477 "num_base_bdevs_discovered": 4, 00:20:36.477 "num_base_bdevs_operational": 4, 00:20:36.477 "process": { 00:20:36.477 "type": "rebuild", 00:20:36.477 "target": "spare", 00:20:36.477 "progress": { 00:20:36.477 "blocks": 132480, 00:20:36.477 "percent": 69 00:20:36.477 } 00:20:36.477 }, 00:20:36.477 "base_bdevs_list": [ 00:20:36.477 { 00:20:36.477 "name": "spare", 00:20:36.477 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:36.477 "is_configured": true, 00:20:36.477 "data_offset": 2048, 00:20:36.477 "data_size": 63488 00:20:36.477 }, 00:20:36.477 { 00:20:36.477 "name": "BaseBdev2", 00:20:36.477 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:36.477 "is_configured": true, 00:20:36.477 "data_offset": 2048, 00:20:36.477 "data_size": 63488 00:20:36.477 }, 00:20:36.477 { 00:20:36.477 "name": "BaseBdev3", 00:20:36.477 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:36.477 "is_configured": true, 00:20:36.477 "data_offset": 2048, 00:20:36.477 "data_size": 63488 00:20:36.477 }, 00:20:36.477 { 00:20:36.477 "name": "BaseBdev4", 00:20:36.477 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:36.477 "is_configured": true, 00:20:36.477 "data_offset": 2048, 00:20:36.477 "data_size": 63488 00:20:36.477 } 00:20:36.477 ] 00:20:36.477 }' 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.477 18:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:37.854 18:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:37.854 18:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.854 18:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.854 18:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:37.854 18:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:37.854 18:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.854 18:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.854 18:19:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.854 18:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.854 18:19:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.854 18:19:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.854 18:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.854 "name": "raid_bdev1", 00:20:37.854 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:37.854 "strip_size_kb": 64, 00:20:37.854 "state": "online", 00:20:37.854 "raid_level": "raid5f", 00:20:37.854 "superblock": true, 00:20:37.854 "num_base_bdevs": 4, 00:20:37.854 "num_base_bdevs_discovered": 4, 
00:20:37.854 "num_base_bdevs_operational": 4, 00:20:37.854 "process": { 00:20:37.854 "type": "rebuild", 00:20:37.854 "target": "spare", 00:20:37.854 "progress": { 00:20:37.854 "blocks": 153600, 00:20:37.854 "percent": 80 00:20:37.854 } 00:20:37.854 }, 00:20:37.854 "base_bdevs_list": [ 00:20:37.854 { 00:20:37.854 "name": "spare", 00:20:37.854 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:37.854 "is_configured": true, 00:20:37.854 "data_offset": 2048, 00:20:37.854 "data_size": 63488 00:20:37.854 }, 00:20:37.854 { 00:20:37.854 "name": "BaseBdev2", 00:20:37.854 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:37.854 "is_configured": true, 00:20:37.854 "data_offset": 2048, 00:20:37.854 "data_size": 63488 00:20:37.854 }, 00:20:37.854 { 00:20:37.854 "name": "BaseBdev3", 00:20:37.854 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:37.854 "is_configured": true, 00:20:37.854 "data_offset": 2048, 00:20:37.854 "data_size": 63488 00:20:37.854 }, 00:20:37.854 { 00:20:37.854 "name": "BaseBdev4", 00:20:37.854 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:37.854 "is_configured": true, 00:20:37.854 "data_offset": 2048, 00:20:37.854 "data_size": 63488 00:20:37.854 } 00:20:37.854 ] 00:20:37.854 }' 00:20:37.854 18:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.854 18:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.854 18:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.854 18:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.854 18:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.791 "name": "raid_bdev1", 00:20:38.791 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:38.791 "strip_size_kb": 64, 00:20:38.791 "state": "online", 00:20:38.791 "raid_level": "raid5f", 00:20:38.791 "superblock": true, 00:20:38.791 "num_base_bdevs": 4, 00:20:38.791 "num_base_bdevs_discovered": 4, 00:20:38.791 "num_base_bdevs_operational": 4, 00:20:38.791 "process": { 00:20:38.791 "type": "rebuild", 00:20:38.791 "target": "spare", 00:20:38.791 "progress": { 00:20:38.791 "blocks": 176640, 00:20:38.791 "percent": 92 00:20:38.791 } 00:20:38.791 }, 00:20:38.791 "base_bdevs_list": [ 00:20:38.791 { 00:20:38.791 "name": "spare", 00:20:38.791 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:38.791 "is_configured": true, 00:20:38.791 "data_offset": 2048, 00:20:38.791 "data_size": 63488 00:20:38.791 }, 00:20:38.791 { 00:20:38.791 "name": "BaseBdev2", 
00:20:38.791 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:38.791 "is_configured": true, 00:20:38.791 "data_offset": 2048, 00:20:38.791 "data_size": 63488 00:20:38.791 }, 00:20:38.791 { 00:20:38.791 "name": "BaseBdev3", 00:20:38.791 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:38.791 "is_configured": true, 00:20:38.791 "data_offset": 2048, 00:20:38.791 "data_size": 63488 00:20:38.791 }, 00:20:38.791 { 00:20:38.791 "name": "BaseBdev4", 00:20:38.791 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:38.791 "is_configured": true, 00:20:38.791 "data_offset": 2048, 00:20:38.791 "data_size": 63488 00:20:38.791 } 00:20:38.791 ] 00:20:38.791 }' 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:38.791 18:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:39.727 [2024-12-06 18:19:04.903715] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:39.727 [2024-12-06 18:19:04.903858] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:39.727 [2024-12-06 18:19:04.904065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.984 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:39.984 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.984 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.984 18:19:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:39.984 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:39.984 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.984 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.984 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.984 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.984 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.984 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.984 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.984 "name": "raid_bdev1", 00:20:39.984 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:39.984 "strip_size_kb": 64, 00:20:39.984 "state": "online", 00:20:39.984 "raid_level": "raid5f", 00:20:39.984 "superblock": true, 00:20:39.984 "num_base_bdevs": 4, 00:20:39.984 "num_base_bdevs_discovered": 4, 00:20:39.984 "num_base_bdevs_operational": 4, 00:20:39.984 "base_bdevs_list": [ 00:20:39.984 { 00:20:39.984 "name": "spare", 00:20:39.984 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:39.984 "is_configured": true, 00:20:39.984 "data_offset": 2048, 00:20:39.984 "data_size": 63488 00:20:39.984 }, 00:20:39.984 { 00:20:39.984 "name": "BaseBdev2", 00:20:39.984 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:39.984 "is_configured": true, 00:20:39.984 "data_offset": 2048, 00:20:39.984 "data_size": 63488 00:20:39.984 }, 00:20:39.984 { 00:20:39.984 "name": "BaseBdev3", 00:20:39.984 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:39.984 "is_configured": true, 00:20:39.984 "data_offset": 2048, 00:20:39.984 
"data_size": 63488 00:20:39.984 }, 00:20:39.984 { 00:20:39.984 "name": "BaseBdev4", 00:20:39.984 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:39.984 "is_configured": true, 00:20:39.984 "data_offset": 2048, 00:20:39.984 "data_size": 63488 00:20:39.984 } 00:20:39.984 ] 00:20:39.984 }' 00:20:39.984 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.984 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:39.985 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.985 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:39.985 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:39.985 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:39.985 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.985 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:39.985 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:39.985 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.985 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.985 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.985 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.985 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.985 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.243 18:19:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.243 "name": "raid_bdev1", 00:20:40.243 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:40.243 "strip_size_kb": 64, 00:20:40.243 "state": "online", 00:20:40.243 "raid_level": "raid5f", 00:20:40.243 "superblock": true, 00:20:40.243 "num_base_bdevs": 4, 00:20:40.243 "num_base_bdevs_discovered": 4, 00:20:40.243 "num_base_bdevs_operational": 4, 00:20:40.243 "base_bdevs_list": [ 00:20:40.243 { 00:20:40.243 "name": "spare", 00:20:40.243 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:40.243 "is_configured": true, 00:20:40.243 "data_offset": 2048, 00:20:40.243 "data_size": 63488 00:20:40.243 }, 00:20:40.243 { 00:20:40.243 "name": "BaseBdev2", 00:20:40.243 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:40.243 "is_configured": true, 00:20:40.243 "data_offset": 2048, 00:20:40.243 "data_size": 63488 00:20:40.243 }, 00:20:40.243 { 00:20:40.243 "name": "BaseBdev3", 00:20:40.243 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:40.243 "is_configured": true, 00:20:40.243 "data_offset": 2048, 00:20:40.243 "data_size": 63488 00:20:40.243 }, 00:20:40.243 { 00:20:40.243 "name": "BaseBdev4", 00:20:40.243 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:40.243 "is_configured": true, 00:20:40.243 "data_offset": 2048, 00:20:40.243 "data_size": 63488 00:20:40.243 } 00:20:40.243 ] 00:20:40.243 }' 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.243 "name": "raid_bdev1", 00:20:40.243 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:40.243 "strip_size_kb": 64, 00:20:40.243 "state": "online", 00:20:40.243 "raid_level": "raid5f", 00:20:40.243 "superblock": true, 00:20:40.243 "num_base_bdevs": 4, 00:20:40.243 "num_base_bdevs_discovered": 4, 00:20:40.243 
"num_base_bdevs_operational": 4, 00:20:40.243 "base_bdevs_list": [ 00:20:40.243 { 00:20:40.243 "name": "spare", 00:20:40.243 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:40.243 "is_configured": true, 00:20:40.243 "data_offset": 2048, 00:20:40.243 "data_size": 63488 00:20:40.243 }, 00:20:40.243 { 00:20:40.243 "name": "BaseBdev2", 00:20:40.243 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:40.243 "is_configured": true, 00:20:40.243 "data_offset": 2048, 00:20:40.243 "data_size": 63488 00:20:40.243 }, 00:20:40.243 { 00:20:40.243 "name": "BaseBdev3", 00:20:40.243 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:40.243 "is_configured": true, 00:20:40.243 "data_offset": 2048, 00:20:40.243 "data_size": 63488 00:20:40.243 }, 00:20:40.243 { 00:20:40.243 "name": "BaseBdev4", 00:20:40.243 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:40.243 "is_configured": true, 00:20:40.243 "data_offset": 2048, 00:20:40.243 "data_size": 63488 00:20:40.243 } 00:20:40.243 ] 00:20:40.243 }' 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.243 18:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.809 [2024-12-06 18:19:06.128932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:40.809 [2024-12-06 18:19:06.128981] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:40.809 [2024-12-06 18:19:06.129237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:40.809 [2024-12-06 18:19:06.129493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:20:40.809 [2024-12-06 18:19:06.129524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:40.809 18:19:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:40.809 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:41.067 /dev/nbd0 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:41.067 1+0 records in 00:20:41.067 1+0 records out 00:20:41.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267805 s, 15.3 MB/s 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:41.067 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:41.326 /dev/nbd1 00:20:41.584 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:41.584 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:41.584 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:41.584 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:41.585 1+0 records in 00:20:41.585 1+0 records out 00:20:41.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359107 s, 11.4 MB/s 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:41.585 18:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:41.585 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:41.585 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:41.585 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:41.585 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:41.585 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:41.585 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:41.585 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:20:41.843 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:41.843 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:41.843 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:41.843 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:41.843 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:41.843 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:42.102 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:42.102 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:42.102 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:42.102 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.360 [2024-12-06 18:19:07.654303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:42.360 [2024-12-06 18:19:07.654361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.360 [2024-12-06 18:19:07.654411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:42.360 [2024-12-06 18:19:07.654433] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.360 [2024-12-06 18:19:07.657821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.360 [2024-12-06 18:19:07.657866] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:42.360 [2024-12-06 18:19:07.658010] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:42.360 [2024-12-06 18:19:07.658104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:42.360 [2024-12-06 18:19:07.658280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:42.360 [2024-12-06 18:19:07.658494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:20:42.360 [2024-12-06 18:19:07.658636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:42.360 spare 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.360 [2024-12-06 18:19:07.758834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:42.360 [2024-12-06 18:19:07.758955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:42.360 [2024-12-06 18:19:07.759454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:20:42.360 [2024-12-06 18:19:07.766378] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:42.360 [2024-12-06 18:19:07.766409] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:42.360 [2024-12-06 18:19:07.766736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:42.360 18:19:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.360 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.360 "name": "raid_bdev1", 00:20:42.360 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:42.360 "strip_size_kb": 64, 00:20:42.360 "state": "online", 00:20:42.360 "raid_level": "raid5f", 00:20:42.360 "superblock": true, 00:20:42.360 "num_base_bdevs": 4, 00:20:42.360 "num_base_bdevs_discovered": 4, 00:20:42.360 "num_base_bdevs_operational": 4, 00:20:42.360 "base_bdevs_list": [ 00:20:42.360 { 00:20:42.360 "name": "spare", 00:20:42.360 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:42.360 "is_configured": true, 00:20:42.360 "data_offset": 2048, 00:20:42.360 "data_size": 63488 00:20:42.360 }, 00:20:42.360 { 00:20:42.360 "name": "BaseBdev2", 00:20:42.360 "uuid": 
"770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:42.360 "is_configured": true, 00:20:42.361 "data_offset": 2048, 00:20:42.361 "data_size": 63488 00:20:42.361 }, 00:20:42.361 { 00:20:42.361 "name": "BaseBdev3", 00:20:42.361 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:42.361 "is_configured": true, 00:20:42.361 "data_offset": 2048, 00:20:42.361 "data_size": 63488 00:20:42.361 }, 00:20:42.361 { 00:20:42.361 "name": "BaseBdev4", 00:20:42.361 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:42.361 "is_configured": true, 00:20:42.361 "data_offset": 2048, 00:20:42.361 "data_size": 63488 00:20:42.361 } 00:20:42.361 ] 00:20:42.361 }' 00:20:42.361 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.361 18:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.928 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:42.928 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:42.928 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:42.928 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:42.928 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:42.928 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.928 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.928 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.928 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.928 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.928 18:19:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:42.928 "name": "raid_bdev1", 00:20:42.928 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:42.928 "strip_size_kb": 64, 00:20:42.928 "state": "online", 00:20:42.928 "raid_level": "raid5f", 00:20:42.928 "superblock": true, 00:20:42.928 "num_base_bdevs": 4, 00:20:42.928 "num_base_bdevs_discovered": 4, 00:20:42.928 "num_base_bdevs_operational": 4, 00:20:42.928 "base_bdevs_list": [ 00:20:42.928 { 00:20:42.928 "name": "spare", 00:20:42.928 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:42.928 "is_configured": true, 00:20:42.928 "data_offset": 2048, 00:20:42.928 "data_size": 63488 00:20:42.928 }, 00:20:42.928 { 00:20:42.928 "name": "BaseBdev2", 00:20:42.928 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:42.928 "is_configured": true, 00:20:42.928 "data_offset": 2048, 00:20:42.928 "data_size": 63488 00:20:42.928 }, 00:20:42.928 { 00:20:42.928 "name": "BaseBdev3", 00:20:42.928 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:42.928 "is_configured": true, 00:20:42.928 "data_offset": 2048, 00:20:42.928 "data_size": 63488 00:20:42.928 }, 00:20:42.928 { 00:20:42.928 "name": "BaseBdev4", 00:20:42.928 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:42.928 "is_configured": true, 00:20:42.928 "data_offset": 2048, 00:20:42.928 "data_size": 63488 00:20:42.928 } 00:20:42.928 ] 00:20:42.928 }' 00:20:42.928 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.928 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:42.928 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:43.187 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:43.188 
18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.188 [2024-12-06 18:19:08.523003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.188 
18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.188 "name": "raid_bdev1", 00:20:43.188 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:43.188 "strip_size_kb": 64, 00:20:43.188 "state": "online", 00:20:43.188 "raid_level": "raid5f", 00:20:43.188 "superblock": true, 00:20:43.188 "num_base_bdevs": 4, 00:20:43.188 "num_base_bdevs_discovered": 3, 00:20:43.188 "num_base_bdevs_operational": 3, 00:20:43.188 "base_bdevs_list": [ 00:20:43.188 { 00:20:43.188 "name": null, 00:20:43.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.188 "is_configured": false, 00:20:43.188 "data_offset": 0, 00:20:43.188 "data_size": 63488 00:20:43.188 }, 00:20:43.188 { 00:20:43.188 "name": "BaseBdev2", 00:20:43.188 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:43.188 "is_configured": true, 00:20:43.188 "data_offset": 2048, 00:20:43.188 "data_size": 63488 00:20:43.188 }, 00:20:43.188 { 00:20:43.188 "name": "BaseBdev3", 00:20:43.188 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:43.188 "is_configured": true, 00:20:43.188 "data_offset": 2048, 00:20:43.188 "data_size": 63488 00:20:43.188 }, 00:20:43.188 { 00:20:43.188 "name": "BaseBdev4", 00:20:43.188 "uuid": 
"5315644d-2595-5312-bfee-5d4c009f858c", 00:20:43.188 "is_configured": true, 00:20:43.188 "data_offset": 2048, 00:20:43.188 "data_size": 63488 00:20:43.188 } 00:20:43.188 ] 00:20:43.188 }' 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.188 18:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.756 18:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:43.756 18:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.756 18:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.756 [2024-12-06 18:19:09.051178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:43.756 [2024-12-06 18:19:09.051417] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:43.756 [2024-12-06 18:19:09.051448] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:43.756 [2024-12-06 18:19:09.051495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:43.756 [2024-12-06 18:19:09.065294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:20:43.756 18:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.756 18:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:43.756 [2024-12-06 18:19:09.074321] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:44.695 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:44.695 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:44.695 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:44.695 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:44.695 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:44.695 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.695 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.695 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.695 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.695 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.695 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:44.695 "name": "raid_bdev1", 00:20:44.695 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:44.695 "strip_size_kb": 64, 00:20:44.695 "state": "online", 00:20:44.695 
"raid_level": "raid5f", 00:20:44.695 "superblock": true, 00:20:44.695 "num_base_bdevs": 4, 00:20:44.695 "num_base_bdevs_discovered": 4, 00:20:44.695 "num_base_bdevs_operational": 4, 00:20:44.695 "process": { 00:20:44.695 "type": "rebuild", 00:20:44.695 "target": "spare", 00:20:44.695 "progress": { 00:20:44.695 "blocks": 17280, 00:20:44.695 "percent": 9 00:20:44.695 } 00:20:44.695 }, 00:20:44.695 "base_bdevs_list": [ 00:20:44.695 { 00:20:44.695 "name": "spare", 00:20:44.695 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:44.695 "is_configured": true, 00:20:44.695 "data_offset": 2048, 00:20:44.695 "data_size": 63488 00:20:44.695 }, 00:20:44.695 { 00:20:44.695 "name": "BaseBdev2", 00:20:44.695 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:44.695 "is_configured": true, 00:20:44.695 "data_offset": 2048, 00:20:44.695 "data_size": 63488 00:20:44.695 }, 00:20:44.695 { 00:20:44.695 "name": "BaseBdev3", 00:20:44.695 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:44.695 "is_configured": true, 00:20:44.695 "data_offset": 2048, 00:20:44.695 "data_size": 63488 00:20:44.695 }, 00:20:44.695 { 00:20:44.695 "name": "BaseBdev4", 00:20:44.695 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:44.695 "is_configured": true, 00:20:44.695 "data_offset": 2048, 00:20:44.695 "data_size": 63488 00:20:44.695 } 00:20:44.695 ] 00:20:44.695 }' 00:20:44.695 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:44.695 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:44.695 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:44.954 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:44.954 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:44.954 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.954 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.954 [2024-12-06 18:19:10.227359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:44.954 [2024-12-06 18:19:10.287321] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:44.954 [2024-12-06 18:19:10.287413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.954 [2024-12-06 18:19:10.287440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:44.954 [2024-12-06 18:19:10.287457] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:44.954 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.954 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:44.954 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:44.954 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:44.954 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:44.954 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:44.954 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:44.954 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.954 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.954 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.955 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:44.955 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.955 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.955 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.955 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.955 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.955 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.955 "name": "raid_bdev1", 00:20:44.955 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:44.955 "strip_size_kb": 64, 00:20:44.955 "state": "online", 00:20:44.955 "raid_level": "raid5f", 00:20:44.955 "superblock": true, 00:20:44.955 "num_base_bdevs": 4, 00:20:44.955 "num_base_bdevs_discovered": 3, 00:20:44.955 "num_base_bdevs_operational": 3, 00:20:44.955 "base_bdevs_list": [ 00:20:44.955 { 00:20:44.955 "name": null, 00:20:44.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.955 "is_configured": false, 00:20:44.955 "data_offset": 0, 00:20:44.955 "data_size": 63488 00:20:44.955 }, 00:20:44.955 { 00:20:44.955 "name": "BaseBdev2", 00:20:44.955 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:44.955 "is_configured": true, 00:20:44.955 "data_offset": 2048, 00:20:44.955 "data_size": 63488 00:20:44.955 }, 00:20:44.955 { 00:20:44.955 "name": "BaseBdev3", 00:20:44.955 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:44.955 "is_configured": true, 00:20:44.955 "data_offset": 2048, 00:20:44.955 "data_size": 63488 00:20:44.955 }, 00:20:44.955 { 00:20:44.955 "name": "BaseBdev4", 00:20:44.955 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:44.955 "is_configured": true, 00:20:44.955 "data_offset": 2048, 00:20:44.955 "data_size": 63488 00:20:44.955 } 00:20:44.955 ] 00:20:44.955 }' 
00:20:44.955 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.955 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.523 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:45.523 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.523 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.523 [2024-12-06 18:19:10.822564] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:45.523 [2024-12-06 18:19:10.822649] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.523 [2024-12-06 18:19:10.822684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:20:45.523 [2024-12-06 18:19:10.822704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.523 [2024-12-06 18:19:10.823342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.523 [2024-12-06 18:19:10.823384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:45.523 [2024-12-06 18:19:10.823500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:45.523 [2024-12-06 18:19:10.823525] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:45.523 [2024-12-06 18:19:10.823539] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:45.523 [2024-12-06 18:19:10.823588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:45.523 [2024-12-06 18:19:10.837112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:20:45.523 spare 00:20:45.523 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.523 18:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:45.523 [2024-12-06 18:19:10.846016] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:46.457 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:46.457 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:46.457 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:46.457 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:46.457 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:46.457 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.457 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.457 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.457 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.457 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.457 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:46.457 "name": "raid_bdev1", 00:20:46.457 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:46.457 "strip_size_kb": 64, 00:20:46.457 "state": 
"online", 00:20:46.457 "raid_level": "raid5f", 00:20:46.457 "superblock": true, 00:20:46.457 "num_base_bdevs": 4, 00:20:46.457 "num_base_bdevs_discovered": 4, 00:20:46.457 "num_base_bdevs_operational": 4, 00:20:46.457 "process": { 00:20:46.457 "type": "rebuild", 00:20:46.457 "target": "spare", 00:20:46.457 "progress": { 00:20:46.457 "blocks": 17280, 00:20:46.457 "percent": 9 00:20:46.457 } 00:20:46.457 }, 00:20:46.457 "base_bdevs_list": [ 00:20:46.457 { 00:20:46.457 "name": "spare", 00:20:46.457 "uuid": "f0631493-b302-5f4e-a57e-1ec1daea85ac", 00:20:46.457 "is_configured": true, 00:20:46.457 "data_offset": 2048, 00:20:46.457 "data_size": 63488 00:20:46.457 }, 00:20:46.457 { 00:20:46.457 "name": "BaseBdev2", 00:20:46.457 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:46.457 "is_configured": true, 00:20:46.457 "data_offset": 2048, 00:20:46.457 "data_size": 63488 00:20:46.457 }, 00:20:46.457 { 00:20:46.457 "name": "BaseBdev3", 00:20:46.457 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:46.457 "is_configured": true, 00:20:46.457 "data_offset": 2048, 00:20:46.457 "data_size": 63488 00:20:46.457 }, 00:20:46.457 { 00:20:46.457 "name": "BaseBdev4", 00:20:46.457 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:46.457 "is_configured": true, 00:20:46.457 "data_offset": 2048, 00:20:46.457 "data_size": 63488 00:20:46.457 } 00:20:46.457 ] 00:20:46.457 }' 00:20:46.457 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:46.457 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:46.457 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:46.714 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:46.714 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:46.714 18:19:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.714 18:19:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.714 [2024-12-06 18:19:11.995703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:46.714 [2024-12-06 18:19:12.058853] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:46.714 [2024-12-06 18:19:12.058939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.714 [2024-12-06 18:19:12.058968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:46.714 [2024-12-06 18:19:12.058979] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:46.714 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.714 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.715 18:19:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.715 "name": "raid_bdev1", 00:20:46.715 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:46.715 "strip_size_kb": 64, 00:20:46.715 "state": "online", 00:20:46.715 "raid_level": "raid5f", 00:20:46.715 "superblock": true, 00:20:46.715 "num_base_bdevs": 4, 00:20:46.715 "num_base_bdevs_discovered": 3, 00:20:46.715 "num_base_bdevs_operational": 3, 00:20:46.715 "base_bdevs_list": [ 00:20:46.715 { 00:20:46.715 "name": null, 00:20:46.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.715 "is_configured": false, 00:20:46.715 "data_offset": 0, 00:20:46.715 "data_size": 63488 00:20:46.715 }, 00:20:46.715 { 00:20:46.715 "name": "BaseBdev2", 00:20:46.715 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:46.715 "is_configured": true, 00:20:46.715 "data_offset": 2048, 00:20:46.715 "data_size": 63488 00:20:46.715 }, 00:20:46.715 { 00:20:46.715 "name": "BaseBdev3", 00:20:46.715 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:46.715 "is_configured": true, 00:20:46.715 "data_offset": 2048, 00:20:46.715 "data_size": 63488 00:20:46.715 }, 00:20:46.715 { 00:20:46.715 "name": "BaseBdev4", 00:20:46.715 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:46.715 "is_configured": true, 00:20:46.715 "data_offset": 2048, 00:20:46.715 
"data_size": 63488 00:20:46.715 } 00:20:46.715 ] 00:20:46.715 }' 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.715 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:47.281 "name": "raid_bdev1", 00:20:47.281 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:47.281 "strip_size_kb": 64, 00:20:47.281 "state": "online", 00:20:47.281 "raid_level": "raid5f", 00:20:47.281 "superblock": true, 00:20:47.281 "num_base_bdevs": 4, 00:20:47.281 "num_base_bdevs_discovered": 3, 00:20:47.281 "num_base_bdevs_operational": 3, 00:20:47.281 "base_bdevs_list": [ 00:20:47.281 { 00:20:47.281 "name": null, 00:20:47.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.281 
"is_configured": false, 00:20:47.281 "data_offset": 0, 00:20:47.281 "data_size": 63488 00:20:47.281 }, 00:20:47.281 { 00:20:47.281 "name": "BaseBdev2", 00:20:47.281 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:47.281 "is_configured": true, 00:20:47.281 "data_offset": 2048, 00:20:47.281 "data_size": 63488 00:20:47.281 }, 00:20:47.281 { 00:20:47.281 "name": "BaseBdev3", 00:20:47.281 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:47.281 "is_configured": true, 00:20:47.281 "data_offset": 2048, 00:20:47.281 "data_size": 63488 00:20:47.281 }, 00:20:47.281 { 00:20:47.281 "name": "BaseBdev4", 00:20:47.281 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:47.281 "is_configured": true, 00:20:47.281 "data_offset": 2048, 00:20:47.281 "data_size": 63488 00:20:47.281 } 00:20:47.281 ] 00:20:47.281 }' 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.281 18:19:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.281 [2024-12-06 18:19:12.771949] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:47.281 [2024-12-06 18:19:12.772012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.281 [2024-12-06 18:19:12.772045] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:20:47.281 [2024-12-06 18:19:12.772060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.281 [2024-12-06 18:19:12.772616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.281 [2024-12-06 18:19:12.772656] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:47.281 [2024-12-06 18:19:12.772787] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:47.281 [2024-12-06 18:19:12.772809] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:47.281 [2024-12-06 18:19:12.772826] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:47.281 [2024-12-06 18:19:12.772838] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:47.281 BaseBdev1 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.281 18:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:48.654 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:48.654 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.654 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:48.654 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:48.654 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.654 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:48.654 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.654 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.654 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.654 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.654 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.655 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.655 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.655 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.655 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.655 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.655 "name": "raid_bdev1", 00:20:48.655 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:48.655 "strip_size_kb": 64, 00:20:48.655 "state": "online", 00:20:48.655 "raid_level": "raid5f", 00:20:48.655 "superblock": true, 00:20:48.655 "num_base_bdevs": 4, 00:20:48.655 "num_base_bdevs_discovered": 3, 00:20:48.655 "num_base_bdevs_operational": 3, 00:20:48.655 "base_bdevs_list": [ 00:20:48.655 { 00:20:48.655 "name": null, 00:20:48.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.655 "is_configured": false, 00:20:48.655 
"data_offset": 0, 00:20:48.655 "data_size": 63488 00:20:48.655 }, 00:20:48.655 { 00:20:48.655 "name": "BaseBdev2", 00:20:48.655 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:48.655 "is_configured": true, 00:20:48.655 "data_offset": 2048, 00:20:48.655 "data_size": 63488 00:20:48.655 }, 00:20:48.655 { 00:20:48.655 "name": "BaseBdev3", 00:20:48.655 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:48.655 "is_configured": true, 00:20:48.655 "data_offset": 2048, 00:20:48.655 "data_size": 63488 00:20:48.655 }, 00:20:48.655 { 00:20:48.655 "name": "BaseBdev4", 00:20:48.655 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:48.655 "is_configured": true, 00:20:48.655 "data_offset": 2048, 00:20:48.655 "data_size": 63488 00:20:48.655 } 00:20:48.655 ] 00:20:48.655 }' 00:20:48.655 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.655 18:19:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.913 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:48.913 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:48.913 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:48.913 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:48.913 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:48.913 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.913 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.913 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.913 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:48.913 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.913 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:48.913 "name": "raid_bdev1", 00:20:48.913 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:48.913 "strip_size_kb": 64, 00:20:48.913 "state": "online", 00:20:48.913 "raid_level": "raid5f", 00:20:48.913 "superblock": true, 00:20:48.913 "num_base_bdevs": 4, 00:20:48.913 "num_base_bdevs_discovered": 3, 00:20:48.913 "num_base_bdevs_operational": 3, 00:20:48.913 "base_bdevs_list": [ 00:20:48.913 { 00:20:48.913 "name": null, 00:20:48.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.913 "is_configured": false, 00:20:48.913 "data_offset": 0, 00:20:48.913 "data_size": 63488 00:20:48.913 }, 00:20:48.913 { 00:20:48.913 "name": "BaseBdev2", 00:20:48.913 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:48.913 "is_configured": true, 00:20:48.913 "data_offset": 2048, 00:20:48.913 "data_size": 63488 00:20:48.913 }, 00:20:48.913 { 00:20:48.913 "name": "BaseBdev3", 00:20:48.913 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:48.913 "is_configured": true, 00:20:48.913 "data_offset": 2048, 00:20:48.913 "data_size": 63488 00:20:48.913 }, 00:20:48.913 { 00:20:48.913 "name": "BaseBdev4", 00:20:48.913 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:48.913 "is_configured": true, 00:20:48.913 "data_offset": 2048, 00:20:48.913 "data_size": 63488 00:20:48.913 } 00:20:48.913 ] 00:20:48.913 }' 00:20:48.913 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:48.913 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:48.913 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.170 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:49.170 
18:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:49.170 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:20:49.170 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:49.170 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:49.170 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:49.170 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:49.171 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:49.171 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:49.171 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.171 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.171 [2024-12-06 18:19:14.488609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:49.171 [2024-12-06 18:19:14.488838] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:49.171 [2024-12-06 18:19:14.488863] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:49.171 request: 00:20:49.171 { 00:20:49.171 "base_bdev": "BaseBdev1", 00:20:49.171 "raid_bdev": "raid_bdev1", 00:20:49.171 "method": "bdev_raid_add_base_bdev", 00:20:49.171 "req_id": 1 00:20:49.171 } 00:20:49.171 Got JSON-RPC error response 00:20:49.171 response: 00:20:49.171 { 00:20:49.171 "code": -22, 00:20:49.171 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:20:49.171 } 00:20:49.171 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:49.171 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:20:49.171 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:49.171 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:49.171 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:49.171 18:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.115 "name": "raid_bdev1", 00:20:50.115 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:50.115 "strip_size_kb": 64, 00:20:50.115 "state": "online", 00:20:50.115 "raid_level": "raid5f", 00:20:50.115 "superblock": true, 00:20:50.115 "num_base_bdevs": 4, 00:20:50.115 "num_base_bdevs_discovered": 3, 00:20:50.115 "num_base_bdevs_operational": 3, 00:20:50.115 "base_bdevs_list": [ 00:20:50.115 { 00:20:50.115 "name": null, 00:20:50.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.115 "is_configured": false, 00:20:50.115 "data_offset": 0, 00:20:50.115 "data_size": 63488 00:20:50.115 }, 00:20:50.115 { 00:20:50.115 "name": "BaseBdev2", 00:20:50.115 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:50.115 "is_configured": true, 00:20:50.115 "data_offset": 2048, 00:20:50.115 "data_size": 63488 00:20:50.115 }, 00:20:50.115 { 00:20:50.115 "name": "BaseBdev3", 00:20:50.115 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:50.115 "is_configured": true, 00:20:50.115 "data_offset": 2048, 00:20:50.115 "data_size": 63488 00:20:50.115 }, 00:20:50.115 { 00:20:50.115 "name": "BaseBdev4", 00:20:50.115 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:50.115 "is_configured": true, 00:20:50.115 "data_offset": 2048, 00:20:50.115 "data_size": 63488 00:20:50.115 } 00:20:50.115 ] 00:20:50.115 }' 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.115 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:20:50.700 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:50.700 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:50.700 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:50.700 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:50.700 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:50.700 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.700 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.700 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.700 18:19:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.700 "name": "raid_bdev1", 00:20:50.700 "uuid": "11f00a51-cb25-4aa2-953d-7bd21e8f7d1b", 00:20:50.700 "strip_size_kb": 64, 00:20:50.700 "state": "online", 00:20:50.700 "raid_level": "raid5f", 00:20:50.700 "superblock": true, 00:20:50.700 "num_base_bdevs": 4, 00:20:50.700 "num_base_bdevs_discovered": 3, 00:20:50.700 "num_base_bdevs_operational": 3, 00:20:50.700 "base_bdevs_list": [ 00:20:50.700 { 00:20:50.700 "name": null, 00:20:50.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.700 "is_configured": false, 00:20:50.700 "data_offset": 0, 00:20:50.700 "data_size": 63488 00:20:50.700 }, 00:20:50.700 { 00:20:50.700 "name": "BaseBdev2", 00:20:50.700 "uuid": "770cda95-7b7a-53b8-af0b-6a38f9f33800", 00:20:50.700 "is_configured": true, 
00:20:50.700 "data_offset": 2048, 00:20:50.700 "data_size": 63488 00:20:50.700 }, 00:20:50.700 { 00:20:50.700 "name": "BaseBdev3", 00:20:50.700 "uuid": "0a49514e-827b-56a3-a49c-5ee395bb920c", 00:20:50.700 "is_configured": true, 00:20:50.700 "data_offset": 2048, 00:20:50.700 "data_size": 63488 00:20:50.700 }, 00:20:50.700 { 00:20:50.700 "name": "BaseBdev4", 00:20:50.700 "uuid": "5315644d-2595-5312-bfee-5d4c009f858c", 00:20:50.700 "is_configured": true, 00:20:50.700 "data_offset": 2048, 00:20:50.700 "data_size": 63488 00:20:50.700 } 00:20:50.700 ] 00:20:50.700 }' 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85570 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85570 ']' 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85570 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85570 00:20:50.700 killing process with pid 85570 00:20:50.700 Received shutdown signal, test time was about 60.000000 seconds 00:20:50.700 00:20:50.700 Latency(us) 00:20:50.700 [2024-12-06T18:19:16.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.700 [2024-12-06T18:19:16.220Z] 
=================================================================================================================== 00:20:50.700 [2024-12-06T18:19:16.220Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85570' 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85570 00:20:50.700 [2024-12-06 18:19:16.183636] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:50.700 18:19:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85570 00:20:50.700 [2024-12-06 18:19:16.183813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:50.700 [2024-12-06 18:19:16.183913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:50.700 [2024-12-06 18:19:16.183935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:51.293 [2024-12-06 18:19:16.613818] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:52.247 18:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:52.247 00:20:52.247 real 0m28.737s 00:20:52.247 user 0m37.443s 00:20:52.247 sys 0m2.915s 00:20:52.247 18:19:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.247 ************************************ 00:20:52.247 END TEST raid5f_rebuild_test_sb 00:20:52.247 ************************************ 00:20:52.247 18:19:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.247 18:19:17 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:20:52.247 18:19:17 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:20:52.247 18:19:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:52.247 18:19:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.247 18:19:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:52.247 ************************************ 00:20:52.247 START TEST raid_state_function_test_sb_4k 00:20:52.247 ************************************ 00:20:52.247 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:52.247 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:52.247 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:52.247 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:52.247 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:52.247 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:52.247 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:52.247 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:52.248 18:19:17 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86392 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86392' 00:20:52.248 Process raid pid: 86392 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86392 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86392 ']' 00:20:52.248 18:19:17 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.248 18:19:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:52.507 [2024-12-06 18:19:17.789171] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:20:52.507 [2024-12-06 18:19:17.789533] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.507 [2024-12-06 18:19:17.971033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.766 [2024-12-06 18:19:18.113342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.025 [2024-12-06 18:19:18.325466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:53.025 [2024-12-06 18:19:18.325655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:53.593 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.593 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:20:53.593 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:20:53.593 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.593 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.593 [2024-12-06 18:19:18.877001] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:53.593 [2024-12-06 18:19:18.877071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:53.593 [2024-12-06 18:19:18.877090] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:53.593 [2024-12-06 18:19:18.877107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:53.593 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.593 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:53.593 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:53.593 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:53.593 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:53.593 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:53.593 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:53.593 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.594 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.594 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.594 
18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.594 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.594 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.594 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.594 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.594 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.594 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.594 "name": "Existed_Raid", 00:20:53.594 "uuid": "05612975-2d2a-42c7-8b06-407dc18c7478", 00:20:53.594 "strip_size_kb": 0, 00:20:53.594 "state": "configuring", 00:20:53.594 "raid_level": "raid1", 00:20:53.594 "superblock": true, 00:20:53.594 "num_base_bdevs": 2, 00:20:53.594 "num_base_bdevs_discovered": 0, 00:20:53.594 "num_base_bdevs_operational": 2, 00:20:53.594 "base_bdevs_list": [ 00:20:53.594 { 00:20:53.594 "name": "BaseBdev1", 00:20:53.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.594 "is_configured": false, 00:20:53.594 "data_offset": 0, 00:20:53.594 "data_size": 0 00:20:53.594 }, 00:20:53.594 { 00:20:53.594 "name": "BaseBdev2", 00:20:53.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.594 "is_configured": false, 00:20:53.594 "data_offset": 0, 00:20:53.594 "data_size": 0 00:20:53.594 } 00:20:53.594 ] 00:20:53.594 }' 00:20:53.594 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.594 18:19:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.853 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:20:53.853 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.853 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.853 [2024-12-06 18:19:19.365085] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:53.853 [2024-12-06 18:19:19.365129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:53.853 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.853 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:53.853 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.853 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.112 [2024-12-06 18:19:19.377065] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:54.112 [2024-12-06 18:19:19.377236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:54.112 [2024-12-06 18:19:19.377355] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:54.112 [2024-12-06 18:19:19.377419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:54.112 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.112 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:20:54.112 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.112 18:19:19 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.113 [2024-12-06 18:19:19.426977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.113 BaseBdev1 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.113 [ 00:20:54.113 { 00:20:54.113 "name": "BaseBdev1", 00:20:54.113 "aliases": [ 00:20:54.113 
"2272dc53-eab7-4227-9e89-67648aa99bb3" 00:20:54.113 ], 00:20:54.113 "product_name": "Malloc disk", 00:20:54.113 "block_size": 4096, 00:20:54.113 "num_blocks": 8192, 00:20:54.113 "uuid": "2272dc53-eab7-4227-9e89-67648aa99bb3", 00:20:54.113 "assigned_rate_limits": { 00:20:54.113 "rw_ios_per_sec": 0, 00:20:54.113 "rw_mbytes_per_sec": 0, 00:20:54.113 "r_mbytes_per_sec": 0, 00:20:54.113 "w_mbytes_per_sec": 0 00:20:54.113 }, 00:20:54.113 "claimed": true, 00:20:54.113 "claim_type": "exclusive_write", 00:20:54.113 "zoned": false, 00:20:54.113 "supported_io_types": { 00:20:54.113 "read": true, 00:20:54.113 "write": true, 00:20:54.113 "unmap": true, 00:20:54.113 "flush": true, 00:20:54.113 "reset": true, 00:20:54.113 "nvme_admin": false, 00:20:54.113 "nvme_io": false, 00:20:54.113 "nvme_io_md": false, 00:20:54.113 "write_zeroes": true, 00:20:54.113 "zcopy": true, 00:20:54.113 "get_zone_info": false, 00:20:54.113 "zone_management": false, 00:20:54.113 "zone_append": false, 00:20:54.113 "compare": false, 00:20:54.113 "compare_and_write": false, 00:20:54.113 "abort": true, 00:20:54.113 "seek_hole": false, 00:20:54.113 "seek_data": false, 00:20:54.113 "copy": true, 00:20:54.113 "nvme_iov_md": false 00:20:54.113 }, 00:20:54.113 "memory_domains": [ 00:20:54.113 { 00:20:54.113 "dma_device_id": "system", 00:20:54.113 "dma_device_type": 1 00:20:54.113 }, 00:20:54.113 { 00:20:54.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.113 "dma_device_type": 2 00:20:54.113 } 00:20:54.113 ], 00:20:54.113 "driver_specific": {} 00:20:54.113 } 00:20:54.113 ] 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.113 "name": "Existed_Raid", 00:20:54.113 "uuid": "a32fdf9a-70fa-4f4f-b1e1-2bf35969fe91", 00:20:54.113 "strip_size_kb": 0, 00:20:54.113 "state": "configuring", 00:20:54.113 "raid_level": "raid1", 00:20:54.113 "superblock": true, 00:20:54.113 "num_base_bdevs": 2, 00:20:54.113 
"num_base_bdevs_discovered": 1, 00:20:54.113 "num_base_bdevs_operational": 2, 00:20:54.113 "base_bdevs_list": [ 00:20:54.113 { 00:20:54.113 "name": "BaseBdev1", 00:20:54.113 "uuid": "2272dc53-eab7-4227-9e89-67648aa99bb3", 00:20:54.113 "is_configured": true, 00:20:54.113 "data_offset": 256, 00:20:54.113 "data_size": 7936 00:20:54.113 }, 00:20:54.113 { 00:20:54.113 "name": "BaseBdev2", 00:20:54.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.113 "is_configured": false, 00:20:54.113 "data_offset": 0, 00:20:54.113 "data_size": 0 00:20:54.113 } 00:20:54.113 ] 00:20:54.113 }' 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.113 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.680 [2024-12-06 18:19:19.947169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:54.680 [2024-12-06 18:19:19.947351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.680 [2024-12-06 18:19:19.955208] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.680 [2024-12-06 18:19:19.957566] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:54.680 [2024-12-06 18:19:19.957615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.680 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.681 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.681 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:54.681 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.681 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.681 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.681 18:19:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.681 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.681 "name": "Existed_Raid", 00:20:54.681 "uuid": "e8cf815c-ce66-4d5d-81df-612ebcd08911", 00:20:54.681 "strip_size_kb": 0, 00:20:54.681 "state": "configuring", 00:20:54.681 "raid_level": "raid1", 00:20:54.681 "superblock": true, 00:20:54.681 "num_base_bdevs": 2, 00:20:54.681 "num_base_bdevs_discovered": 1, 00:20:54.681 "num_base_bdevs_operational": 2, 00:20:54.681 "base_bdevs_list": [ 00:20:54.681 { 00:20:54.681 "name": "BaseBdev1", 00:20:54.681 "uuid": "2272dc53-eab7-4227-9e89-67648aa99bb3", 00:20:54.681 "is_configured": true, 00:20:54.681 "data_offset": 256, 00:20:54.681 "data_size": 7936 00:20:54.681 }, 00:20:54.681 { 00:20:54.681 "name": "BaseBdev2", 00:20:54.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.681 "is_configured": false, 00:20:54.681 "data_offset": 0, 00:20:54.681 "data_size": 0 00:20:54.681 } 00:20:54.681 ] 00:20:54.681 }' 00:20:54.681 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.681 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.940 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:20:54.940 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.940 18:19:20 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.200 [2024-12-06 18:19:20.499005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:55.200 [2024-12-06 18:19:20.499506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:55.200 [2024-12-06 18:19:20.499531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:55.200 BaseBdev2 00:20:55.200 [2024-12-06 18:19:20.499872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:55.200 [2024-12-06 18:19:20.500086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:55.200 [2024-12-06 18:19:20.500110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:55.200 [2024-12-06 18:19:20.500282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.200 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.200 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:55.200 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:55.200 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:55.200 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:20:55.200 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:55.200 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:55.200 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:55.200 18:19:20 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.200 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.200 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.200 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:55.200 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.200 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.200 [ 00:20:55.200 { 00:20:55.200 "name": "BaseBdev2", 00:20:55.200 "aliases": [ 00:20:55.200 "035265c9-2bd8-4228-a6e8-4b7ffe57baf8" 00:20:55.200 ], 00:20:55.200 "product_name": "Malloc disk", 00:20:55.200 "block_size": 4096, 00:20:55.200 "num_blocks": 8192, 00:20:55.200 "uuid": "035265c9-2bd8-4228-a6e8-4b7ffe57baf8", 00:20:55.200 "assigned_rate_limits": { 00:20:55.200 "rw_ios_per_sec": 0, 00:20:55.200 "rw_mbytes_per_sec": 0, 00:20:55.200 "r_mbytes_per_sec": 0, 00:20:55.201 "w_mbytes_per_sec": 0 00:20:55.201 }, 00:20:55.201 "claimed": true, 00:20:55.201 "claim_type": "exclusive_write", 00:20:55.201 "zoned": false, 00:20:55.201 "supported_io_types": { 00:20:55.201 "read": true, 00:20:55.201 "write": true, 00:20:55.201 "unmap": true, 00:20:55.201 "flush": true, 00:20:55.201 "reset": true, 00:20:55.201 "nvme_admin": false, 00:20:55.201 "nvme_io": false, 00:20:55.201 "nvme_io_md": false, 00:20:55.201 "write_zeroes": true, 00:20:55.201 "zcopy": true, 00:20:55.201 "get_zone_info": false, 00:20:55.201 "zone_management": false, 00:20:55.201 "zone_append": false, 00:20:55.201 "compare": false, 00:20:55.201 "compare_and_write": false, 00:20:55.201 "abort": true, 00:20:55.201 "seek_hole": false, 00:20:55.201 "seek_data": false, 00:20:55.201 "copy": true, 00:20:55.201 "nvme_iov_md": false 
00:20:55.201 }, 00:20:55.201 "memory_domains": [ 00:20:55.201 { 00:20:55.201 "dma_device_id": "system", 00:20:55.201 "dma_device_type": 1 00:20:55.201 }, 00:20:55.201 { 00:20:55.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.201 "dma_device_type": 2 00:20:55.201 } 00:20:55.201 ], 00:20:55.201 "driver_specific": {} 00:20:55.201 } 00:20:55.201 ] 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.201 "name": "Existed_Raid", 00:20:55.201 "uuid": "e8cf815c-ce66-4d5d-81df-612ebcd08911", 00:20:55.201 "strip_size_kb": 0, 00:20:55.201 "state": "online", 00:20:55.201 "raid_level": "raid1", 00:20:55.201 "superblock": true, 00:20:55.201 "num_base_bdevs": 2, 00:20:55.201 "num_base_bdevs_discovered": 2, 00:20:55.201 "num_base_bdevs_operational": 2, 00:20:55.201 "base_bdevs_list": [ 00:20:55.201 { 00:20:55.201 "name": "BaseBdev1", 00:20:55.201 "uuid": "2272dc53-eab7-4227-9e89-67648aa99bb3", 00:20:55.201 "is_configured": true, 00:20:55.201 "data_offset": 256, 00:20:55.201 "data_size": 7936 00:20:55.201 }, 00:20:55.201 { 00:20:55.201 "name": "BaseBdev2", 00:20:55.201 "uuid": "035265c9-2bd8-4228-a6e8-4b7ffe57baf8", 00:20:55.201 "is_configured": true, 00:20:55.201 "data_offset": 256, 00:20:55.201 "data_size": 7936 00:20:55.201 } 00:20:55.201 ] 00:20:55.201 }' 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.201 18:19:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:55.770 18:19:21 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:55.770 [2024-12-06 18:19:21.059619] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:55.770 "name": "Existed_Raid", 00:20:55.770 "aliases": [ 00:20:55.770 "e8cf815c-ce66-4d5d-81df-612ebcd08911" 00:20:55.770 ], 00:20:55.770 "product_name": "Raid Volume", 00:20:55.770 "block_size": 4096, 00:20:55.770 "num_blocks": 7936, 00:20:55.770 "uuid": "e8cf815c-ce66-4d5d-81df-612ebcd08911", 00:20:55.770 "assigned_rate_limits": { 00:20:55.770 "rw_ios_per_sec": 0, 00:20:55.770 "rw_mbytes_per_sec": 0, 00:20:55.770 "r_mbytes_per_sec": 0, 00:20:55.770 "w_mbytes_per_sec": 0 00:20:55.770 }, 00:20:55.770 "claimed": false, 00:20:55.770 "zoned": false, 00:20:55.770 "supported_io_types": { 00:20:55.770 "read": true, 
00:20:55.770 "write": true, 00:20:55.770 "unmap": false, 00:20:55.770 "flush": false, 00:20:55.770 "reset": true, 00:20:55.770 "nvme_admin": false, 00:20:55.770 "nvme_io": false, 00:20:55.770 "nvme_io_md": false, 00:20:55.770 "write_zeroes": true, 00:20:55.770 "zcopy": false, 00:20:55.770 "get_zone_info": false, 00:20:55.770 "zone_management": false, 00:20:55.770 "zone_append": false, 00:20:55.770 "compare": false, 00:20:55.770 "compare_and_write": false, 00:20:55.770 "abort": false, 00:20:55.770 "seek_hole": false, 00:20:55.770 "seek_data": false, 00:20:55.770 "copy": false, 00:20:55.770 "nvme_iov_md": false 00:20:55.770 }, 00:20:55.770 "memory_domains": [ 00:20:55.770 { 00:20:55.770 "dma_device_id": "system", 00:20:55.770 "dma_device_type": 1 00:20:55.770 }, 00:20:55.770 { 00:20:55.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.770 "dma_device_type": 2 00:20:55.770 }, 00:20:55.770 { 00:20:55.770 "dma_device_id": "system", 00:20:55.770 "dma_device_type": 1 00:20:55.770 }, 00:20:55.770 { 00:20:55.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.770 "dma_device_type": 2 00:20:55.770 } 00:20:55.770 ], 00:20:55.770 "driver_specific": { 00:20:55.770 "raid": { 00:20:55.770 "uuid": "e8cf815c-ce66-4d5d-81df-612ebcd08911", 00:20:55.770 "strip_size_kb": 0, 00:20:55.770 "state": "online", 00:20:55.770 "raid_level": "raid1", 00:20:55.770 "superblock": true, 00:20:55.770 "num_base_bdevs": 2, 00:20:55.770 "num_base_bdevs_discovered": 2, 00:20:55.770 "num_base_bdevs_operational": 2, 00:20:55.770 "base_bdevs_list": [ 00:20:55.770 { 00:20:55.770 "name": "BaseBdev1", 00:20:55.770 "uuid": "2272dc53-eab7-4227-9e89-67648aa99bb3", 00:20:55.770 "is_configured": true, 00:20:55.770 "data_offset": 256, 00:20:55.770 "data_size": 7936 00:20:55.770 }, 00:20:55.770 { 00:20:55.770 "name": "BaseBdev2", 00:20:55.770 "uuid": "035265c9-2bd8-4228-a6e8-4b7ffe57baf8", 00:20:55.770 "is_configured": true, 00:20:55.770 "data_offset": 256, 00:20:55.770 "data_size": 7936 00:20:55.770 } 
00:20:55.770 ] 00:20:55.770 } 00:20:55.770 } 00:20:55.770 }' 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:55.770 BaseBdev2' 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.770 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.030 [2024-12-06 18:19:21.323338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.030 "name": "Existed_Raid", 00:20:56.030 "uuid": "e8cf815c-ce66-4d5d-81df-612ebcd08911", 00:20:56.030 "strip_size_kb": 0, 00:20:56.030 "state": "online", 00:20:56.030 "raid_level": "raid1", 00:20:56.030 "superblock": true, 00:20:56.030 "num_base_bdevs": 2, 00:20:56.030 
"num_base_bdevs_discovered": 1, 00:20:56.030 "num_base_bdevs_operational": 1, 00:20:56.030 "base_bdevs_list": [ 00:20:56.030 { 00:20:56.030 "name": null, 00:20:56.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.030 "is_configured": false, 00:20:56.030 "data_offset": 0, 00:20:56.030 "data_size": 7936 00:20:56.030 }, 00:20:56.030 { 00:20:56.030 "name": "BaseBdev2", 00:20:56.030 "uuid": "035265c9-2bd8-4228-a6e8-4b7ffe57baf8", 00:20:56.030 "is_configured": true, 00:20:56.030 "data_offset": 256, 00:20:56.030 "data_size": 7936 00:20:56.030 } 00:20:56.030 ] 00:20:56.030 }' 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.030 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.598 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:56.598 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:56.598 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.598 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.598 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.598 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:56.598 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.598 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:56.598 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:56.598 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:56.598 18:19:21 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.598 18:19:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.598 [2024-12-06 18:19:21.996334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:56.598 [2024-12-06 18:19:21.996494] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:56.598 [2024-12-06 18:19:22.085027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.598 [2024-12-06 18:19:22.085302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.598 [2024-12-06 18:19:22.085452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:56.598 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.598 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:56.598 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:56.598 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.598 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:56.598 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.598 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.598 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.857 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:56.857 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:20:56.857 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:56.857 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86392 00:20:56.857 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86392 ']' 00:20:56.857 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86392 00:20:56.857 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:20:56.857 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.857 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86392 00:20:56.857 killing process with pid 86392 00:20:56.857 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.857 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.857 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86392' 00:20:56.857 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86392 00:20:56.857 [2024-12-06 18:19:22.178781] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:56.857 18:19:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86392 00:20:56.857 [2024-12-06 18:19:22.194215] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:57.793 18:19:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:20:57.793 00:20:57.793 real 0m5.606s 00:20:57.793 user 0m8.427s 00:20:57.793 sys 0m0.807s 00:20:57.793 ************************************ 00:20:57.793 END TEST raid_state_function_test_sb_4k 00:20:57.793 
************************************ 00:20:57.793 18:19:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.793 18:19:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:58.051 18:19:23 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:20:58.051 18:19:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:58.051 18:19:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:58.051 18:19:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:58.051 ************************************ 00:20:58.051 START TEST raid_superblock_test_4k 00:20:58.051 ************************************ 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86644 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86644 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86644 ']' 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.051 18:19:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:58.051 [2024-12-06 18:19:23.463346] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:20:58.052 [2024-12-06 18:19:23.464102] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86644 ] 00:20:58.310 [2024-12-06 18:19:23.651124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.310 [2024-12-06 18:19:23.784743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.569 [2024-12-06 18:19:23.988091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.569 [2024-12-06 18:19:23.988161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.137 malloc1 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.137 [2024-12-06 18:19:24.483668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:59.137 [2024-12-06 18:19:24.483741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.137 [2024-12-06 18:19:24.483788] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:59.137 [2024-12-06 18:19:24.483808] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.137 [2024-12-06 18:19:24.486523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.137 [2024-12-06 18:19:24.486570] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:59.137 pt1 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.137 malloc2 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.137 [2024-12-06 18:19:24.540230] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:59.137 [2024-12-06 18:19:24.540300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.137 [2024-12-06 18:19:24.540338] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:59.137 [2024-12-06 18:19:24.540353] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.137 [2024-12-06 18:19:24.543141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.137 [2024-12-06 
18:19:24.543320] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:59.137 pt2 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.137 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.137 [2024-12-06 18:19:24.552277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:59.137 [2024-12-06 18:19:24.554811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:59.137 [2024-12-06 18:19:24.555165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:59.138 [2024-12-06 18:19:24.555298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:59.138 [2024-12-06 18:19:24.555671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:59.138 [2024-12-06 18:19:24.556021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:59.138 [2024-12-06 18:19:24.556155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:59.138 [2024-12-06 18:19:24.556577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.138 "name": "raid_bdev1", 00:20:59.138 "uuid": "c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2", 00:20:59.138 "strip_size_kb": 0, 00:20:59.138 "state": "online", 00:20:59.138 "raid_level": "raid1", 00:20:59.138 "superblock": true, 00:20:59.138 "num_base_bdevs": 2, 00:20:59.138 
"num_base_bdevs_discovered": 2, 00:20:59.138 "num_base_bdevs_operational": 2, 00:20:59.138 "base_bdevs_list": [ 00:20:59.138 { 00:20:59.138 "name": "pt1", 00:20:59.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:59.138 "is_configured": true, 00:20:59.138 "data_offset": 256, 00:20:59.138 "data_size": 7936 00:20:59.138 }, 00:20:59.138 { 00:20:59.138 "name": "pt2", 00:20:59.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:59.138 "is_configured": true, 00:20:59.138 "data_offset": 256, 00:20:59.138 "data_size": 7936 00:20:59.138 } 00:20:59.138 ] 00:20:59.138 }' 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.138 18:19:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.704 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:59.704 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:59.704 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:59.704 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:59.704 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:59.704 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:59.704 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:59.704 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:59.704 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.704 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.704 [2024-12-06 18:19:25.113060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:59.705 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.705 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:59.705 "name": "raid_bdev1", 00:20:59.705 "aliases": [ 00:20:59.705 "c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2" 00:20:59.705 ], 00:20:59.705 "product_name": "Raid Volume", 00:20:59.705 "block_size": 4096, 00:20:59.705 "num_blocks": 7936, 00:20:59.705 "uuid": "c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2", 00:20:59.705 "assigned_rate_limits": { 00:20:59.705 "rw_ios_per_sec": 0, 00:20:59.705 "rw_mbytes_per_sec": 0, 00:20:59.705 "r_mbytes_per_sec": 0, 00:20:59.705 "w_mbytes_per_sec": 0 00:20:59.705 }, 00:20:59.705 "claimed": false, 00:20:59.705 "zoned": false, 00:20:59.705 "supported_io_types": { 00:20:59.705 "read": true, 00:20:59.705 "write": true, 00:20:59.705 "unmap": false, 00:20:59.705 "flush": false, 00:20:59.705 "reset": true, 00:20:59.705 "nvme_admin": false, 00:20:59.705 "nvme_io": false, 00:20:59.705 "nvme_io_md": false, 00:20:59.705 "write_zeroes": true, 00:20:59.705 "zcopy": false, 00:20:59.705 "get_zone_info": false, 00:20:59.705 "zone_management": false, 00:20:59.705 "zone_append": false, 00:20:59.705 "compare": false, 00:20:59.705 "compare_and_write": false, 00:20:59.705 "abort": false, 00:20:59.705 "seek_hole": false, 00:20:59.705 "seek_data": false, 00:20:59.705 "copy": false, 00:20:59.705 "nvme_iov_md": false 00:20:59.705 }, 00:20:59.705 "memory_domains": [ 00:20:59.705 { 00:20:59.705 "dma_device_id": "system", 00:20:59.705 "dma_device_type": 1 00:20:59.705 }, 00:20:59.705 { 00:20:59.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.705 "dma_device_type": 2 00:20:59.705 }, 00:20:59.705 { 00:20:59.705 "dma_device_id": "system", 00:20:59.705 "dma_device_type": 1 00:20:59.705 }, 00:20:59.705 { 00:20:59.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.705 "dma_device_type": 2 00:20:59.705 } 00:20:59.705 ], 
00:20:59.705 "driver_specific": { 00:20:59.705 "raid": { 00:20:59.705 "uuid": "c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2", 00:20:59.705 "strip_size_kb": 0, 00:20:59.705 "state": "online", 00:20:59.705 "raid_level": "raid1", 00:20:59.705 "superblock": true, 00:20:59.705 "num_base_bdevs": 2, 00:20:59.705 "num_base_bdevs_discovered": 2, 00:20:59.705 "num_base_bdevs_operational": 2, 00:20:59.705 "base_bdevs_list": [ 00:20:59.705 { 00:20:59.705 "name": "pt1", 00:20:59.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:59.705 "is_configured": true, 00:20:59.705 "data_offset": 256, 00:20:59.705 "data_size": 7936 00:20:59.705 }, 00:20:59.705 { 00:20:59.705 "name": "pt2", 00:20:59.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:59.705 "is_configured": true, 00:20:59.705 "data_offset": 256, 00:20:59.705 "data_size": 7936 00:20:59.705 } 00:20:59.705 ] 00:20:59.705 } 00:20:59.705 } 00:20:59.705 }' 00:20:59.705 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:59.705 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:59.705 pt2' 00:20:59.705 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:59.963 18:19:25 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.963 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.964 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:59.964 [2024-12-06 18:19:25.381102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:59.964 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:20:59.964 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2 00:20:59.964 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2 ']' 00:20:59.964 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:59.964 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.964 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.964 [2024-12-06 18:19:25.436758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:59.964 [2024-12-06 18:19:25.436813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:59.964 [2024-12-06 18:19:25.436937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:59.964 [2024-12-06 18:19:25.437039] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:59.964 [2024-12-06 18:19:25.437064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:59.964 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.964 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.964 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.964 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.964 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:59.964 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.223 [2024-12-06 18:19:25.576870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:00.223 [2024-12-06 18:19:25.579628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:00.223 [2024-12-06 18:19:25.579877] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:00.223 [2024-12-06 18:19:25.579987] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:00.223 [2024-12-06 18:19:25.580020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:00.223 [2024-12-06 18:19:25.580038] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:00.223 request: 00:21:00.223 { 00:21:00.223 "name": "raid_bdev1", 00:21:00.223 "raid_level": "raid1", 00:21:00.223 "base_bdevs": [ 00:21:00.223 "malloc1", 00:21:00.223 "malloc2" 00:21:00.223 ], 00:21:00.223 "superblock": false, 00:21:00.223 "method": "bdev_raid_create", 00:21:00.223 "req_id": 1 00:21:00.223 } 00:21:00.223 Got JSON-RPC error response 00:21:00.223 response: 00:21:00.223 { 00:21:00.223 "code": -17, 00:21:00.223 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:00.223 } 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.223 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.223 [2024-12-06 18:19:25.640934] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:00.223 [2024-12-06 18:19:25.641005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.224 [2024-12-06 18:19:25.641039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:00.224 [2024-12-06 18:19:25.641069] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.224 [2024-12-06 18:19:25.644370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.224 [2024-12-06 18:19:25.644450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:00.224 [2024-12-06 18:19:25.644563] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:00.224 [2024-12-06 18:19:25.644662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:00.224 pt1 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.224 "name": "raid_bdev1", 00:21:00.224 "uuid": "c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2", 00:21:00.224 "strip_size_kb": 0, 00:21:00.224 "state": "configuring", 00:21:00.224 "raid_level": "raid1", 00:21:00.224 "superblock": true, 00:21:00.224 "num_base_bdevs": 2, 00:21:00.224 "num_base_bdevs_discovered": 1, 00:21:00.224 "num_base_bdevs_operational": 2, 00:21:00.224 "base_bdevs_list": [ 00:21:00.224 { 00:21:00.224 "name": "pt1", 00:21:00.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:00.224 "is_configured": true, 00:21:00.224 "data_offset": 256, 00:21:00.224 "data_size": 7936 00:21:00.224 }, 00:21:00.224 { 00:21:00.224 "name": null, 00:21:00.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:00.224 "is_configured": false, 00:21:00.224 "data_offset": 256, 00:21:00.224 "data_size": 7936 00:21:00.224 } 
00:21:00.224 ] 00:21:00.224 }' 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.224 18:19:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.792 [2024-12-06 18:19:26.153128] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:00.792 [2024-12-06 18:19:26.153247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.792 [2024-12-06 18:19:26.153280] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:00.792 [2024-12-06 18:19:26.153298] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.792 [2024-12-06 18:19:26.153891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.792 [2024-12-06 18:19:26.153942] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:00.792 [2024-12-06 18:19:26.154065] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:00.792 [2024-12-06 18:19:26.154110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:00.792 [2024-12-06 18:19:26.154273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:21:00.792 [2024-12-06 18:19:26.154297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:00.792 [2024-12-06 18:19:26.154613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:00.792 [2024-12-06 18:19:26.154856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:00.792 [2024-12-06 18:19:26.154879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:00.792 [2024-12-06 18:19:26.155063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.792 pt2 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.792 "name": "raid_bdev1", 00:21:00.792 "uuid": "c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2", 00:21:00.792 "strip_size_kb": 0, 00:21:00.792 "state": "online", 00:21:00.792 "raid_level": "raid1", 00:21:00.792 "superblock": true, 00:21:00.792 "num_base_bdevs": 2, 00:21:00.792 "num_base_bdevs_discovered": 2, 00:21:00.792 "num_base_bdevs_operational": 2, 00:21:00.792 "base_bdevs_list": [ 00:21:00.792 { 00:21:00.792 "name": "pt1", 00:21:00.792 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:00.792 "is_configured": true, 00:21:00.792 "data_offset": 256, 00:21:00.792 "data_size": 7936 00:21:00.792 }, 00:21:00.792 { 00:21:00.792 "name": "pt2", 00:21:00.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:00.792 "is_configured": true, 00:21:00.792 "data_offset": 256, 00:21:00.792 "data_size": 7936 00:21:00.792 } 00:21:00.792 ] 00:21:00.792 }' 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.792 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:01.359 [2024-12-06 18:19:26.685812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:01.359 "name": "raid_bdev1", 00:21:01.359 "aliases": [ 00:21:01.359 "c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2" 00:21:01.359 ], 00:21:01.359 "product_name": "Raid Volume", 00:21:01.359 "block_size": 4096, 00:21:01.359 "num_blocks": 7936, 00:21:01.359 "uuid": "c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2", 00:21:01.359 "assigned_rate_limits": { 00:21:01.359 "rw_ios_per_sec": 0, 00:21:01.359 "rw_mbytes_per_sec": 0, 00:21:01.359 "r_mbytes_per_sec": 0, 00:21:01.359 "w_mbytes_per_sec": 0 00:21:01.359 }, 00:21:01.359 "claimed": false, 00:21:01.359 "zoned": false, 00:21:01.359 "supported_io_types": { 00:21:01.359 "read": true, 00:21:01.359 "write": true, 00:21:01.359 "unmap": false, 
00:21:01.359 "flush": false, 00:21:01.359 "reset": true, 00:21:01.359 "nvme_admin": false, 00:21:01.359 "nvme_io": false, 00:21:01.359 "nvme_io_md": false, 00:21:01.359 "write_zeroes": true, 00:21:01.359 "zcopy": false, 00:21:01.359 "get_zone_info": false, 00:21:01.359 "zone_management": false, 00:21:01.359 "zone_append": false, 00:21:01.359 "compare": false, 00:21:01.359 "compare_and_write": false, 00:21:01.359 "abort": false, 00:21:01.359 "seek_hole": false, 00:21:01.359 "seek_data": false, 00:21:01.359 "copy": false, 00:21:01.359 "nvme_iov_md": false 00:21:01.359 }, 00:21:01.359 "memory_domains": [ 00:21:01.359 { 00:21:01.359 "dma_device_id": "system", 00:21:01.359 "dma_device_type": 1 00:21:01.359 }, 00:21:01.359 { 00:21:01.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.359 "dma_device_type": 2 00:21:01.359 }, 00:21:01.359 { 00:21:01.359 "dma_device_id": "system", 00:21:01.359 "dma_device_type": 1 00:21:01.359 }, 00:21:01.359 { 00:21:01.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.359 "dma_device_type": 2 00:21:01.359 } 00:21:01.359 ], 00:21:01.359 "driver_specific": { 00:21:01.359 "raid": { 00:21:01.359 "uuid": "c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2", 00:21:01.359 "strip_size_kb": 0, 00:21:01.359 "state": "online", 00:21:01.359 "raid_level": "raid1", 00:21:01.359 "superblock": true, 00:21:01.359 "num_base_bdevs": 2, 00:21:01.359 "num_base_bdevs_discovered": 2, 00:21:01.359 "num_base_bdevs_operational": 2, 00:21:01.359 "base_bdevs_list": [ 00:21:01.359 { 00:21:01.359 "name": "pt1", 00:21:01.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:01.359 "is_configured": true, 00:21:01.359 "data_offset": 256, 00:21:01.359 "data_size": 7936 00:21:01.359 }, 00:21:01.359 { 00:21:01.359 "name": "pt2", 00:21:01.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:01.359 "is_configured": true, 00:21:01.359 "data_offset": 256, 00:21:01.359 "data_size": 7936 00:21:01.359 } 00:21:01.359 ] 00:21:01.359 } 00:21:01.359 } 00:21:01.359 }' 00:21:01.359 
18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:01.359 pt2' 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.359 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:01.619 [2024-12-06 18:19:26.941834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2 '!=' c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2 ']' 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.619 18:19:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.619 [2024-12-06 18:19:26.997609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.619 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.619 "name": "raid_bdev1", 00:21:01.619 "uuid": 
"c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2", 00:21:01.619 "strip_size_kb": 0, 00:21:01.619 "state": "online", 00:21:01.619 "raid_level": "raid1", 00:21:01.619 "superblock": true, 00:21:01.619 "num_base_bdevs": 2, 00:21:01.619 "num_base_bdevs_discovered": 1, 00:21:01.620 "num_base_bdevs_operational": 1, 00:21:01.620 "base_bdevs_list": [ 00:21:01.620 { 00:21:01.620 "name": null, 00:21:01.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.620 "is_configured": false, 00:21:01.620 "data_offset": 0, 00:21:01.620 "data_size": 7936 00:21:01.620 }, 00:21:01.620 { 00:21:01.620 "name": "pt2", 00:21:01.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:01.620 "is_configured": true, 00:21:01.620 "data_offset": 256, 00:21:01.620 "data_size": 7936 00:21:01.620 } 00:21:01.620 ] 00:21:01.620 }' 00:21:01.620 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.620 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.188 [2024-12-06 18:19:27.521768] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:02.188 [2024-12-06 18:19:27.521814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:02.188 [2024-12-06 18:19:27.521960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:02.188 [2024-12-06 18:19:27.522030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:02.188 [2024-12-06 18:19:27.522052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.188 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.188 [2024-12-06 18:19:27.593745] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:02.189 [2024-12-06 18:19:27.593834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.189 [2024-12-06 18:19:27.593867] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:02.189 [2024-12-06 18:19:27.593884] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.189 [2024-12-06 18:19:27.596952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.189 [2024-12-06 18:19:27.597006] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:02.189 [2024-12-06 18:19:27.597115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:02.189 [2024-12-06 18:19:27.597183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:02.189 [2024-12-06 18:19:27.597318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:02.189 [2024-12-06 18:19:27.597340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:02.189 [2024-12-06 18:19:27.597650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:02.189 [2024-12-06 18:19:27.597885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:02.189 [2024-12-06 18:19:27.597902] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:21:02.189 [2024-12-06 18:19:27.598141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.189 pt2 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.189 18:19:27 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.189 "name": "raid_bdev1", 00:21:02.189 "uuid": "c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2", 00:21:02.189 "strip_size_kb": 0, 00:21:02.189 "state": "online", 00:21:02.189 "raid_level": "raid1", 00:21:02.189 "superblock": true, 00:21:02.189 "num_base_bdevs": 2, 00:21:02.189 "num_base_bdevs_discovered": 1, 00:21:02.189 "num_base_bdevs_operational": 1, 00:21:02.189 "base_bdevs_list": [ 00:21:02.189 { 00:21:02.189 "name": null, 00:21:02.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.189 "is_configured": false, 00:21:02.189 "data_offset": 256, 00:21:02.189 "data_size": 7936 00:21:02.189 }, 00:21:02.189 { 00:21:02.189 "name": "pt2", 00:21:02.189 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.189 "is_configured": true, 00:21:02.189 "data_offset": 256, 00:21:02.189 "data_size": 7936 00:21:02.189 } 00:21:02.189 ] 00:21:02.189 }' 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.189 18:19:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.817 [2024-12-06 18:19:28.150233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:02.817 [2024-12-06 18:19:28.150270] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:02.817 [2024-12-06 18:19:28.150372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:02.817 [2024-12-06 18:19:28.150454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:21:02.817 [2024-12-06 18:19:28.150494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.817 [2024-12-06 18:19:28.214217] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:02.817 [2024-12-06 18:19:28.214284] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.817 [2024-12-06 18:19:28.214314] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:02.817 [2024-12-06 18:19:28.214329] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.817 [2024-12-06 18:19:28.217216] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.817 [2024-12-06 18:19:28.217397] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:02.817 [2024-12-06 18:19:28.217511] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:02.817 [2024-12-06 18:19:28.217571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:02.817 [2024-12-06 18:19:28.217751] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:02.817 [2024-12-06 18:19:28.217786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:02.817 [2024-12-06 18:19:28.217812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:02.817 [2024-12-06 18:19:28.217882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:02.817 [2024-12-06 18:19:28.217986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:02.817 [2024-12-06 18:19:28.218001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:02.817 [2024-12-06 18:19:28.218364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:02.817 [2024-12-06 18:19:28.218551] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:02.817 [2024-12-06 18:19:28.218578] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:02.817 pt1 00:21:02.817 [2024-12-06 18:19:28.218834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.817 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:02.818 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.818 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.818 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.818 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.818 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.818 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.818 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.818 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.818 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.818 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.818 "name": "raid_bdev1", 00:21:02.818 "uuid": "c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2", 00:21:02.818 "strip_size_kb": 0, 00:21:02.818 "state": "online", 00:21:02.818 
"raid_level": "raid1", 00:21:02.818 "superblock": true, 00:21:02.818 "num_base_bdevs": 2, 00:21:02.818 "num_base_bdevs_discovered": 1, 00:21:02.818 "num_base_bdevs_operational": 1, 00:21:02.818 "base_bdevs_list": [ 00:21:02.818 { 00:21:02.818 "name": null, 00:21:02.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.818 "is_configured": false, 00:21:02.818 "data_offset": 256, 00:21:02.818 "data_size": 7936 00:21:02.818 }, 00:21:02.818 { 00:21:02.818 "name": "pt2", 00:21:02.818 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.818 "is_configured": true, 00:21:02.818 "data_offset": 256, 00:21:02.818 "data_size": 7936 00:21:02.818 } 00:21:02.818 ] 00:21:02.818 }' 00:21:02.818 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.818 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:21:03.386 [2024-12-06 18:19:28.787260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2 '!=' c9833ab4-ca57-4a7e-8d92-a0b7c0ea27b2 ']' 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86644 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86644 ']' 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86644 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86644 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:03.386 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86644' 00:21:03.386 killing process with pid 86644 00:21:03.387 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86644 00:21:03.387 [2024-12-06 18:19:28.869797] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:03.387 [2024-12-06 18:19:28.869917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:03.387 18:19:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86644 00:21:03.387 [2024-12-06 18:19:28.869984] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:03.387 [2024-12-06 18:19:28.870009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:03.645 [2024-12-06 18:19:29.064758] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:05.020 18:19:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:21:05.020 00:21:05.020 real 0m6.884s 00:21:05.020 user 0m10.848s 00:21:05.020 sys 0m0.964s 00:21:05.020 18:19:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.020 ************************************ 00:21:05.020 END TEST raid_superblock_test_4k 00:21:05.020 18:19:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:05.020 ************************************ 00:21:05.020 18:19:30 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:21:05.020 18:19:30 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:21:05.020 18:19:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:05.020 18:19:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.020 18:19:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:05.020 ************************************ 00:21:05.020 START TEST raid_rebuild_test_sb_4k 00:21:05.020 ************************************ 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:05.020 
18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86977 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86977 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86977 ']' 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.020 18:19:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:05.020 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:05.020 Zero copy mechanism will not be used. 00:21:05.020 [2024-12-06 18:19:30.398306] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:21:05.020 [2024-12-06 18:19:30.398480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86977 ] 00:21:05.278 [2024-12-06 18:19:30.591425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.278 [2024-12-06 18:19:30.777380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.598 [2024-12-06 18:19:31.042888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.598 [2024-12-06 18:19:31.043002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.166 BaseBdev1_malloc 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.166 [2024-12-06 18:19:31.477784] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:06.166 [2024-12-06 18:19:31.477859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.166 [2024-12-06 18:19:31.477890] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:06.166 [2024-12-06 18:19:31.477909] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.166 [2024-12-06 18:19:31.480656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.166 [2024-12-06 18:19:31.480709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:06.166 BaseBdev1 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.166 BaseBdev2_malloc 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.166 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.166 [2024-12-06 18:19:31.529562] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:06.167 [2024-12-06 18:19:31.529822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:21:06.167 [2024-12-06 18:19:31.529868] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:06.167 [2024-12-06 18:19:31.529893] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.167 [2024-12-06 18:19:31.532793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.167 [2024-12-06 18:19:31.532843] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:06.167 BaseBdev2 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.167 spare_malloc 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.167 spare_delay 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.167 
[2024-12-06 18:19:31.603215] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:06.167 [2024-12-06 18:19:31.603292] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.167 [2024-12-06 18:19:31.603322] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:06.167 [2024-12-06 18:19:31.603339] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.167 [2024-12-06 18:19:31.606115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.167 [2024-12-06 18:19:31.606166] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:06.167 spare 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.167 [2024-12-06 18:19:31.611288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:06.167 [2024-12-06 18:19:31.613647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:06.167 [2024-12-06 18:19:31.613922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:06.167 [2024-12-06 18:19:31.613946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:06.167 [2024-12-06 18:19:31.614245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:06.167 [2024-12-06 18:19:31.614466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:06.167 [2024-12-06 
18:19:31.614482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:06.167 [2024-12-06 18:19:31.614689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.167 "name": "raid_bdev1", 00:21:06.167 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:06.167 "strip_size_kb": 0, 00:21:06.167 "state": "online", 00:21:06.167 "raid_level": "raid1", 00:21:06.167 "superblock": true, 00:21:06.167 "num_base_bdevs": 2, 00:21:06.167 "num_base_bdevs_discovered": 2, 00:21:06.167 "num_base_bdevs_operational": 2, 00:21:06.167 "base_bdevs_list": [ 00:21:06.167 { 00:21:06.167 "name": "BaseBdev1", 00:21:06.167 "uuid": "25cfe4ae-2eb8-5166-a74f-c0cacb8c4a5f", 00:21:06.167 "is_configured": true, 00:21:06.167 "data_offset": 256, 00:21:06.167 "data_size": 7936 00:21:06.167 }, 00:21:06.167 { 00:21:06.167 "name": "BaseBdev2", 00:21:06.167 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:06.167 "is_configured": true, 00:21:06.167 "data_offset": 256, 00:21:06.167 "data_size": 7936 00:21:06.167 } 00:21:06.167 ] 00:21:06.167 }' 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.167 18:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.734 [2024-12-06 18:19:32.143790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:06.734 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.734 
18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:07.301 [2024-12-06 18:19:32.575669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:07.301 /dev/nbd0 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:07.301 1+0 records in 00:21:07.301 1+0 records out 00:21:07.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334108 s, 12.3 MB/s 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:07.301 18:19:32 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:07.301 18:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:08.255 7936+0 records in 00:21:08.255 7936+0 records out 00:21:08.255 32505856 bytes (33 MB, 31 MiB) copied, 0.925849 s, 35.1 MB/s 00:21:08.255 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:08.255 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:08.255 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:08.255 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:08.255 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:08.255 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:08.255 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:08.514 [2024-12-06 18:19:33.893176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.514 [2024-12-06 18:19:33.909877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.514 18:19:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.514 "name": "raid_bdev1", 00:21:08.514 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:08.514 "strip_size_kb": 0, 00:21:08.514 "state": "online", 00:21:08.514 "raid_level": "raid1", 00:21:08.514 "superblock": true, 00:21:08.514 "num_base_bdevs": 2, 00:21:08.514 "num_base_bdevs_discovered": 1, 00:21:08.514 "num_base_bdevs_operational": 1, 00:21:08.514 "base_bdevs_list": [ 00:21:08.514 { 00:21:08.514 "name": null, 00:21:08.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.514 "is_configured": false, 00:21:08.514 "data_offset": 0, 00:21:08.514 "data_size": 7936 00:21:08.514 }, 00:21:08.514 { 00:21:08.514 "name": "BaseBdev2", 00:21:08.514 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:08.514 "is_configured": true, 00:21:08.514 "data_offset": 256, 00:21:08.514 
"data_size": 7936 00:21:08.514 } 00:21:08.514 ] 00:21:08.514 }' 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.514 18:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:09.083 18:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:09.083 18:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.083 18:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:09.083 [2024-12-06 18:19:34.410057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:09.083 [2024-12-06 18:19:34.427706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:21:09.083 18:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.083 18:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:09.083 [2024-12-06 18:19:34.430270] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:10.019 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:10.019 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.019 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:10.019 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:10.019 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.019 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.019 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:21:10.019 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.019 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.019 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.019 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.019 "name": "raid_bdev1", 00:21:10.019 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:10.019 "strip_size_kb": 0, 00:21:10.019 "state": "online", 00:21:10.019 "raid_level": "raid1", 00:21:10.019 "superblock": true, 00:21:10.019 "num_base_bdevs": 2, 00:21:10.019 "num_base_bdevs_discovered": 2, 00:21:10.019 "num_base_bdevs_operational": 2, 00:21:10.019 "process": { 00:21:10.019 "type": "rebuild", 00:21:10.019 "target": "spare", 00:21:10.019 "progress": { 00:21:10.019 "blocks": 2560, 00:21:10.019 "percent": 32 00:21:10.019 } 00:21:10.019 }, 00:21:10.019 "base_bdevs_list": [ 00:21:10.019 { 00:21:10.019 "name": "spare", 00:21:10.019 "uuid": "da2c6958-fdbd-543b-8b10-0488a5ae9dff", 00:21:10.019 "is_configured": true, 00:21:10.019 "data_offset": 256, 00:21:10.019 "data_size": 7936 00:21:10.019 }, 00:21:10.019 { 00:21:10.019 "name": "BaseBdev2", 00:21:10.019 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:10.019 "is_configured": true, 00:21:10.019 "data_offset": 256, 00:21:10.019 "data_size": 7936 00:21:10.019 } 00:21:10.019 ] 00:21:10.019 }' 00:21:10.019 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.277 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:10.277 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.277 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:21:10.277 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:10.277 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.277 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.277 [2024-12-06 18:19:35.599847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:10.277 [2024-12-06 18:19:35.639074] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:10.277 [2024-12-06 18:19:35.639327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.277 [2024-12-06 18:19:35.639355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:10.277 [2024-12-06 18:19:35.639370] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:10.277 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.277 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:10.277 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.277 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.277 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.277 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.277 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:10.277 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.278 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:10.278 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.278 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.278 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.278 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.278 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.278 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.278 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.278 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.278 "name": "raid_bdev1", 00:21:10.278 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:10.278 "strip_size_kb": 0, 00:21:10.278 "state": "online", 00:21:10.278 "raid_level": "raid1", 00:21:10.278 "superblock": true, 00:21:10.278 "num_base_bdevs": 2, 00:21:10.278 "num_base_bdevs_discovered": 1, 00:21:10.278 "num_base_bdevs_operational": 1, 00:21:10.278 "base_bdevs_list": [ 00:21:10.278 { 00:21:10.278 "name": null, 00:21:10.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.278 "is_configured": false, 00:21:10.278 "data_offset": 0, 00:21:10.278 "data_size": 7936 00:21:10.278 }, 00:21:10.278 { 00:21:10.278 "name": "BaseBdev2", 00:21:10.278 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:10.278 "is_configured": true, 00:21:10.278 "data_offset": 256, 00:21:10.278 "data_size": 7936 00:21:10.278 } 00:21:10.278 ] 00:21:10.278 }' 00:21:10.278 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.278 18:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.845 18:19:36 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.845 "name": "raid_bdev1", 00:21:10.845 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:10.845 "strip_size_kb": 0, 00:21:10.845 "state": "online", 00:21:10.845 "raid_level": "raid1", 00:21:10.845 "superblock": true, 00:21:10.845 "num_base_bdevs": 2, 00:21:10.845 "num_base_bdevs_discovered": 1, 00:21:10.845 "num_base_bdevs_operational": 1, 00:21:10.845 "base_bdevs_list": [ 00:21:10.845 { 00:21:10.845 "name": null, 00:21:10.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.845 "is_configured": false, 00:21:10.845 "data_offset": 0, 00:21:10.845 "data_size": 7936 00:21:10.845 }, 00:21:10.845 { 00:21:10.845 "name": "BaseBdev2", 00:21:10.845 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:10.845 "is_configured": true, 00:21:10.845 "data_offset": 
256, 00:21:10.845 "data_size": 7936 00:21:10.845 } 00:21:10.845 ] 00:21:10.845 }' 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.845 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.845 [2024-12-06 18:19:36.351678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:11.104 [2024-12-06 18:19:36.367487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:11.104 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.104 18:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:11.104 [2024-12-06 18:19:36.370037] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:12.040 "name": "raid_bdev1", 00:21:12.040 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:12.040 "strip_size_kb": 0, 00:21:12.040 "state": "online", 00:21:12.040 "raid_level": "raid1", 00:21:12.040 "superblock": true, 00:21:12.040 "num_base_bdevs": 2, 00:21:12.040 "num_base_bdevs_discovered": 2, 00:21:12.040 "num_base_bdevs_operational": 2, 00:21:12.040 "process": { 00:21:12.040 "type": "rebuild", 00:21:12.040 "target": "spare", 00:21:12.040 "progress": { 00:21:12.040 "blocks": 2560, 00:21:12.040 "percent": 32 00:21:12.040 } 00:21:12.040 }, 00:21:12.040 "base_bdevs_list": [ 00:21:12.040 { 00:21:12.040 "name": "spare", 00:21:12.040 "uuid": "da2c6958-fdbd-543b-8b10-0488a5ae9dff", 00:21:12.040 "is_configured": true, 00:21:12.040 "data_offset": 256, 00:21:12.040 "data_size": 7936 00:21:12.040 }, 00:21:12.040 { 00:21:12.040 "name": "BaseBdev2", 00:21:12.040 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:12.040 "is_configured": true, 00:21:12.040 "data_offset": 256, 00:21:12.040 "data_size": 7936 00:21:12.040 } 00:21:12.040 ] 00:21:12.040 }' 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:12.040 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=733 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.040 18:19:37 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:12.040 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.300 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:12.300 "name": "raid_bdev1", 00:21:12.300 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:12.300 "strip_size_kb": 0, 00:21:12.300 "state": "online", 00:21:12.300 "raid_level": "raid1", 00:21:12.300 "superblock": true, 00:21:12.300 "num_base_bdevs": 2, 00:21:12.300 "num_base_bdevs_discovered": 2, 00:21:12.300 "num_base_bdevs_operational": 2, 00:21:12.300 "process": { 00:21:12.300 "type": "rebuild", 00:21:12.300 "target": "spare", 00:21:12.300 "progress": { 00:21:12.300 "blocks": 2816, 00:21:12.300 "percent": 35 00:21:12.300 } 00:21:12.300 }, 00:21:12.300 "base_bdevs_list": [ 00:21:12.300 { 00:21:12.300 "name": "spare", 00:21:12.300 "uuid": "da2c6958-fdbd-543b-8b10-0488a5ae9dff", 00:21:12.300 "is_configured": true, 00:21:12.300 "data_offset": 256, 00:21:12.300 "data_size": 7936 00:21:12.300 }, 00:21:12.300 { 00:21:12.300 "name": "BaseBdev2", 00:21:12.300 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:12.300 "is_configured": true, 00:21:12.300 "data_offset": 256, 00:21:12.300 "data_size": 7936 00:21:12.300 } 00:21:12.300 ] 00:21:12.300 }' 00:21:12.300 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:12.300 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:12.300 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:12.300 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:12.300 18:19:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:21:13.239 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:13.239 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.239 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:13.239 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:13.239 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:13.239 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:13.239 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.239 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.239 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.239 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:13.239 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.498 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:13.498 "name": "raid_bdev1", 00:21:13.498 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:13.498 "strip_size_kb": 0, 00:21:13.498 "state": "online", 00:21:13.498 "raid_level": "raid1", 00:21:13.498 "superblock": true, 00:21:13.498 "num_base_bdevs": 2, 00:21:13.498 "num_base_bdevs_discovered": 2, 00:21:13.498 "num_base_bdevs_operational": 2, 00:21:13.498 "process": { 00:21:13.498 "type": "rebuild", 00:21:13.498 "target": "spare", 00:21:13.498 "progress": { 00:21:13.498 "blocks": 5888, 00:21:13.498 "percent": 74 00:21:13.498 } 00:21:13.498 }, 00:21:13.498 "base_bdevs_list": [ 00:21:13.498 { 
00:21:13.498 "name": "spare", 00:21:13.498 "uuid": "da2c6958-fdbd-543b-8b10-0488a5ae9dff", 00:21:13.498 "is_configured": true, 00:21:13.498 "data_offset": 256, 00:21:13.498 "data_size": 7936 00:21:13.498 }, 00:21:13.498 { 00:21:13.498 "name": "BaseBdev2", 00:21:13.498 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:13.498 "is_configured": true, 00:21:13.498 "data_offset": 256, 00:21:13.498 "data_size": 7936 00:21:13.498 } 00:21:13.498 ] 00:21:13.498 }' 00:21:13.498 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:13.498 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:13.498 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:13.498 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:13.498 18:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:14.186 [2024-12-06 18:19:39.491625] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:14.186 [2024-12-06 18:19:39.491975] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:14.186 [2024-12-06 18:19:39.492154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.485 18:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:14.485 18:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:14.485 18:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.485 18:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:14.485 18:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:21:14.485 18:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.485 18:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.485 18:19:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.485 18:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.485 18:19:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:14.485 18:19:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.485 18:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.485 "name": "raid_bdev1", 00:21:14.485 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:14.485 "strip_size_kb": 0, 00:21:14.485 "state": "online", 00:21:14.485 "raid_level": "raid1", 00:21:14.485 "superblock": true, 00:21:14.485 "num_base_bdevs": 2, 00:21:14.485 "num_base_bdevs_discovered": 2, 00:21:14.485 "num_base_bdevs_operational": 2, 00:21:14.485 "base_bdevs_list": [ 00:21:14.485 { 00:21:14.485 "name": "spare", 00:21:14.485 "uuid": "da2c6958-fdbd-543b-8b10-0488a5ae9dff", 00:21:14.485 "is_configured": true, 00:21:14.485 "data_offset": 256, 00:21:14.485 "data_size": 7936 00:21:14.485 }, 00:21:14.485 { 00:21:14.485 "name": "BaseBdev2", 00:21:14.485 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:14.485 "is_configured": true, 00:21:14.485 "data_offset": 256, 00:21:14.485 "data_size": 7936 00:21:14.485 } 00:21:14.485 ] 00:21:14.485 }' 00:21:14.485 18:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.763 18:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:14.763 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:21:14.763 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:14.763 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:21:14.763 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:14.763 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.763 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:14.763 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:14.763 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.763 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.763 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.763 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.763 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:14.763 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.763 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.763 "name": "raid_bdev1", 00:21:14.763 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:14.763 "strip_size_kb": 0, 00:21:14.764 "state": "online", 00:21:14.764 "raid_level": "raid1", 00:21:14.764 "superblock": true, 00:21:14.764 "num_base_bdevs": 2, 00:21:14.764 "num_base_bdevs_discovered": 2, 00:21:14.764 "num_base_bdevs_operational": 2, 00:21:14.764 "base_bdevs_list": [ 00:21:14.764 { 00:21:14.764 "name": "spare", 00:21:14.764 "uuid": "da2c6958-fdbd-543b-8b10-0488a5ae9dff", 00:21:14.764 "is_configured": true, 00:21:14.764 
"data_offset": 256, 00:21:14.764 "data_size": 7936 00:21:14.764 }, 00:21:14.764 { 00:21:14.764 "name": "BaseBdev2", 00:21:14.764 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:14.764 "is_configured": true, 00:21:14.764 "data_offset": 256, 00:21:14.764 "data_size": 7936 00:21:14.764 } 00:21:14.764 ] 00:21:14.764 }' 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.764 "name": "raid_bdev1", 00:21:14.764 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:14.764 "strip_size_kb": 0, 00:21:14.764 "state": "online", 00:21:14.764 "raid_level": "raid1", 00:21:14.764 "superblock": true, 00:21:14.764 "num_base_bdevs": 2, 00:21:14.764 "num_base_bdevs_discovered": 2, 00:21:14.764 "num_base_bdevs_operational": 2, 00:21:14.764 "base_bdevs_list": [ 00:21:14.764 { 00:21:14.764 "name": "spare", 00:21:14.764 "uuid": "da2c6958-fdbd-543b-8b10-0488a5ae9dff", 00:21:14.764 "is_configured": true, 00:21:14.764 "data_offset": 256, 00:21:14.764 "data_size": 7936 00:21:14.764 }, 00:21:14.764 { 00:21:14.764 "name": "BaseBdev2", 00:21:14.764 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:14.764 "is_configured": true, 00:21:14.764 "data_offset": 256, 00:21:14.764 "data_size": 7936 00:21:14.764 } 00:21:14.764 ] 00:21:14.764 }' 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.764 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:15.331 
[2024-12-06 18:19:40.744926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:15.331 [2024-12-06 18:19:40.744964] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:15.331 [2024-12-06 18:19:40.745064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:15.331 [2024-12-06 18:19:40.745157] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:15.331 [2024-12-06 18:19:40.745178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:15.331 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:15.332 18:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:15.590 /dev/nbd0 00:21:15.849 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:15.849 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:15.849 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:15.849 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:15.849 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:15.849 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:15.849 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:15.849 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:15.849 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:15.849 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:15.849 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.849 1+0 records in 00:21:15.849 1+0 records out 00:21:15.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401289 s, 10.2 MB/s 00:21:15.849 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.849 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:15.849 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.850 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:15.850 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:15.850 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:15.850 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:15.850 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:16.108 /dev/nbd1 00:21:16.108 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:16.108 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:16.108 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:16.108 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:16.109 1+0 records in 00:21:16.109 1+0 records out 00:21:16.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356504 s, 11.5 MB/s 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:16.109 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:16.367 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:16.367 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:16.367 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:16.367 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:16.367 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:16.367 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.367 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:16.626 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:16.626 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:16.626 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:16.626 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.626 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.626 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:16.626 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:16.626 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.626 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.626 18:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:16.886 18:19:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.886 [2024-12-06 18:19:42.245581] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:16.886 [2024-12-06 18:19:42.245646] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.886 [2024-12-06 18:19:42.245683] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:16.886 [2024-12-06 18:19:42.245708] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.886 [2024-12-06 18:19:42.248564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.886 
[2024-12-06 18:19:42.248620] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:16.886 [2024-12-06 18:19:42.248708] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:16.886 [2024-12-06 18:19:42.248791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:16.886 [2024-12-06 18:19:42.249008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:16.886 spare 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.886 [2024-12-06 18:19:42.349157] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:16.886 [2024-12-06 18:19:42.349243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:16.886 [2024-12-06 18:19:42.349660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:16.886 [2024-12-06 18:19:42.349981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:16.886 [2024-12-06 18:19:42.349999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:16.886 [2024-12-06 18:19:42.350271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:16.886 18:19:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:16.886 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.887 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.887 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.887 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.887 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.887 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.887 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.887 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.887 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.146 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.146 "name": "raid_bdev1", 00:21:17.146 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:17.146 "strip_size_kb": 0, 00:21:17.146 "state": "online", 00:21:17.146 "raid_level": "raid1", 00:21:17.146 "superblock": true, 00:21:17.146 "num_base_bdevs": 2, 00:21:17.146 "num_base_bdevs_discovered": 2, 00:21:17.146 "num_base_bdevs_operational": 2, 
00:21:17.146 "base_bdevs_list": [ 00:21:17.146 { 00:21:17.146 "name": "spare", 00:21:17.146 "uuid": "da2c6958-fdbd-543b-8b10-0488a5ae9dff", 00:21:17.146 "is_configured": true, 00:21:17.146 "data_offset": 256, 00:21:17.146 "data_size": 7936 00:21:17.146 }, 00:21:17.146 { 00:21:17.146 "name": "BaseBdev2", 00:21:17.146 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:17.146 "is_configured": true, 00:21:17.146 "data_offset": 256, 00:21:17.146 "data_size": 7936 00:21:17.146 } 00:21:17.146 ] 00:21:17.146 }' 00:21:17.146 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.146 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:17.405 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:17.405 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.405 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:17.405 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:17.405 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.405 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.405 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.405 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:17.405 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.405 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.405 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.405 "name": "raid_bdev1", 00:21:17.405 
"uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:17.405 "strip_size_kb": 0, 00:21:17.405 "state": "online", 00:21:17.405 "raid_level": "raid1", 00:21:17.405 "superblock": true, 00:21:17.405 "num_base_bdevs": 2, 00:21:17.405 "num_base_bdevs_discovered": 2, 00:21:17.405 "num_base_bdevs_operational": 2, 00:21:17.405 "base_bdevs_list": [ 00:21:17.405 { 00:21:17.405 "name": "spare", 00:21:17.405 "uuid": "da2c6958-fdbd-543b-8b10-0488a5ae9dff", 00:21:17.405 "is_configured": true, 00:21:17.405 "data_offset": 256, 00:21:17.405 "data_size": 7936 00:21:17.405 }, 00:21:17.405 { 00:21:17.405 "name": "BaseBdev2", 00:21:17.405 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:17.405 "is_configured": true, 00:21:17.405 "data_offset": 256, 00:21:17.405 "data_size": 7936 00:21:17.405 } 00:21:17.405 ] 00:21:17.405 }' 00:21:17.405 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.663 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:17.663 18:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:17.664 [2024-12-06 18:19:43.054451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:17.664 18:19:43 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.664 "name": "raid_bdev1", 00:21:17.664 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:17.664 "strip_size_kb": 0, 00:21:17.664 "state": "online", 00:21:17.664 "raid_level": "raid1", 00:21:17.664 "superblock": true, 00:21:17.664 "num_base_bdevs": 2, 00:21:17.664 "num_base_bdevs_discovered": 1, 00:21:17.664 "num_base_bdevs_operational": 1, 00:21:17.664 "base_bdevs_list": [ 00:21:17.664 { 00:21:17.664 "name": null, 00:21:17.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.664 "is_configured": false, 00:21:17.664 "data_offset": 0, 00:21:17.664 "data_size": 7936 00:21:17.664 }, 00:21:17.664 { 00:21:17.664 "name": "BaseBdev2", 00:21:17.664 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:17.664 "is_configured": true, 00:21:17.664 "data_offset": 256, 00:21:17.664 "data_size": 7936 00:21:17.664 } 00:21:17.664 ] 00:21:17.664 }' 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.664 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:18.231 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:18.231 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.231 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:18.231 [2024-12-06 18:19:43.578627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:18.231 [2024-12-06 18:19:43.579074] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:21:18.231 [2024-12-06 18:19:43.579224] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:18.231 [2024-12-06 18:19:43.579281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:18.231 [2024-12-06 18:19:43.594790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:18.231 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.231 18:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:18.231 [2024-12-06 18:19:43.597185] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:19.168 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.168 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.168 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:19.168 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:19.168 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.168 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.168 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.168 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.168 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:19.168 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.168 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:19.168 "name": "raid_bdev1", 00:21:19.168 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:19.168 "strip_size_kb": 0, 00:21:19.168 "state": "online", 00:21:19.168 "raid_level": "raid1", 00:21:19.169 "superblock": true, 00:21:19.169 "num_base_bdevs": 2, 00:21:19.169 "num_base_bdevs_discovered": 2, 00:21:19.169 "num_base_bdevs_operational": 2, 00:21:19.169 "process": { 00:21:19.169 "type": "rebuild", 00:21:19.169 "target": "spare", 00:21:19.169 "progress": { 00:21:19.169 "blocks": 2560, 00:21:19.169 "percent": 32 00:21:19.169 } 00:21:19.169 }, 00:21:19.169 "base_bdevs_list": [ 00:21:19.169 { 00:21:19.169 "name": "spare", 00:21:19.169 "uuid": "da2c6958-fdbd-543b-8b10-0488a5ae9dff", 00:21:19.169 "is_configured": true, 00:21:19.169 "data_offset": 256, 00:21:19.169 "data_size": 7936 00:21:19.169 }, 00:21:19.169 { 00:21:19.169 "name": "BaseBdev2", 00:21:19.169 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:19.169 "is_configured": true, 00:21:19.169 "data_offset": 256, 00:21:19.169 "data_size": 7936 00:21:19.169 } 00:21:19.169 ] 00:21:19.169 }' 00:21:19.169 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:19.492 [2024-12-06 18:19:44.754939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:21:19.492 [2024-12-06 18:19:44.805964] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:19.492 [2024-12-06 18:19:44.806048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.492 [2024-12-06 18:19:44.806072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:19.492 [2024-12-06 18:19:44.806086] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.492 "name": "raid_bdev1", 00:21:19.492 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:19.492 "strip_size_kb": 0, 00:21:19.492 "state": "online", 00:21:19.492 "raid_level": "raid1", 00:21:19.492 "superblock": true, 00:21:19.492 "num_base_bdevs": 2, 00:21:19.492 "num_base_bdevs_discovered": 1, 00:21:19.492 "num_base_bdevs_operational": 1, 00:21:19.492 "base_bdevs_list": [ 00:21:19.492 { 00:21:19.492 "name": null, 00:21:19.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.492 "is_configured": false, 00:21:19.492 "data_offset": 0, 00:21:19.492 "data_size": 7936 00:21:19.492 }, 00:21:19.492 { 00:21:19.492 "name": "BaseBdev2", 00:21:19.492 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:19.492 "is_configured": true, 00:21:19.492 "data_offset": 256, 00:21:19.492 "data_size": 7936 00:21:19.492 } 00:21:19.492 ] 00:21:19.492 }' 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.492 18:19:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:20.076 18:19:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:20.076 18:19:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.076 18:19:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:20.076 [2024-12-06 18:19:45.361531] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:20.076 [2024-12-06 
18:19:45.361743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.076 [2024-12-06 18:19:45.361831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:20.076 [2024-12-06 18:19:45.361961] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.076 [2024-12-06 18:19:45.362551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.076 [2024-12-06 18:19:45.362600] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:20.076 [2024-12-06 18:19:45.362749] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:20.076 [2024-12-06 18:19:45.362800] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:20.076 [2024-12-06 18:19:45.362817] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:20.076 [2024-12-06 18:19:45.362850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.076 [2024-12-06 18:19:45.378191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:20.076 spare 00:21:20.076 18:19:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.076 18:19:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:20.076 [2024-12-06 18:19:45.380806] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:21.012 "name": "raid_bdev1", 00:21:21.012 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:21.012 "strip_size_kb": 0, 00:21:21.012 
"state": "online", 00:21:21.012 "raid_level": "raid1", 00:21:21.012 "superblock": true, 00:21:21.012 "num_base_bdevs": 2, 00:21:21.012 "num_base_bdevs_discovered": 2, 00:21:21.012 "num_base_bdevs_operational": 2, 00:21:21.012 "process": { 00:21:21.012 "type": "rebuild", 00:21:21.012 "target": "spare", 00:21:21.012 "progress": { 00:21:21.012 "blocks": 2560, 00:21:21.012 "percent": 32 00:21:21.012 } 00:21:21.012 }, 00:21:21.012 "base_bdevs_list": [ 00:21:21.012 { 00:21:21.012 "name": "spare", 00:21:21.012 "uuid": "da2c6958-fdbd-543b-8b10-0488a5ae9dff", 00:21:21.012 "is_configured": true, 00:21:21.012 "data_offset": 256, 00:21:21.012 "data_size": 7936 00:21:21.012 }, 00:21:21.012 { 00:21:21.012 "name": "BaseBdev2", 00:21:21.012 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:21.012 "is_configured": true, 00:21:21.012 "data_offset": 256, 00:21:21.012 "data_size": 7936 00:21:21.012 } 00:21:21.012 ] 00:21:21.012 }' 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.012 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:21.271 [2024-12-06 18:19:46.530206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:21.271 [2024-12-06 18:19:46.589383] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:21:21.271 [2024-12-06 18:19:46.589461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.271 [2024-12-06 18:19:46.589488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:21.271 [2024-12-06 18:19:46.589499] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.271 18:19:46 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.271 "name": "raid_bdev1", 00:21:21.271 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:21.271 "strip_size_kb": 0, 00:21:21.271 "state": "online", 00:21:21.271 "raid_level": "raid1", 00:21:21.271 "superblock": true, 00:21:21.271 "num_base_bdevs": 2, 00:21:21.271 "num_base_bdevs_discovered": 1, 00:21:21.271 "num_base_bdevs_operational": 1, 00:21:21.271 "base_bdevs_list": [ 00:21:21.271 { 00:21:21.271 "name": null, 00:21:21.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.271 "is_configured": false, 00:21:21.271 "data_offset": 0, 00:21:21.271 "data_size": 7936 00:21:21.271 }, 00:21:21.271 { 00:21:21.271 "name": "BaseBdev2", 00:21:21.271 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:21.271 "is_configured": true, 00:21:21.271 "data_offset": 256, 00:21:21.271 "data_size": 7936 00:21:21.271 } 00:21:21.271 ] 00:21:21.271 }' 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.271 18:19:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:21.839 "name": "raid_bdev1", 00:21:21.839 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:21.839 "strip_size_kb": 0, 00:21:21.839 "state": "online", 00:21:21.839 "raid_level": "raid1", 00:21:21.839 "superblock": true, 00:21:21.839 "num_base_bdevs": 2, 00:21:21.839 "num_base_bdevs_discovered": 1, 00:21:21.839 "num_base_bdevs_operational": 1, 00:21:21.839 "base_bdevs_list": [ 00:21:21.839 { 00:21:21.839 "name": null, 00:21:21.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.839 "is_configured": false, 00:21:21.839 "data_offset": 0, 00:21:21.839 "data_size": 7936 00:21:21.839 }, 00:21:21.839 { 00:21:21.839 "name": "BaseBdev2", 00:21:21.839 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:21.839 "is_configured": true, 00:21:21.839 "data_offset": 256, 00:21:21.839 "data_size": 7936 00:21:21.839 } 00:21:21.839 ] 00:21:21.839 }' 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:21.839 [2024-12-06 18:19:47.272678] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:21.839 [2024-12-06 18:19:47.272743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.839 [2024-12-06 18:19:47.272793] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:21.839 [2024-12-06 18:19:47.272822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.839 [2024-12-06 18:19:47.273373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.839 [2024-12-06 18:19:47.273406] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:21.839 [2024-12-06 18:19:47.273507] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:21.839 [2024-12-06 18:19:47.273528] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:21.839 [2024-12-06 18:19:47.273547] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:21.839 [2024-12-06 18:19:47.273560] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:21.839 BaseBdev1 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.839 18:19:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:22.774 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:22.774 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.774 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:22.774 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:22.774 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:22.774 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:22.774 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.774 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.774 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.774 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.774 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.774 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.774 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:22.774 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.032 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.032 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.032 "name": "raid_bdev1", 00:21:23.032 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:23.032 "strip_size_kb": 0, 00:21:23.032 "state": "online", 00:21:23.032 "raid_level": "raid1", 00:21:23.032 "superblock": true, 00:21:23.032 "num_base_bdevs": 2, 00:21:23.032 "num_base_bdevs_discovered": 1, 00:21:23.032 "num_base_bdevs_operational": 1, 00:21:23.032 "base_bdevs_list": [ 00:21:23.032 { 00:21:23.032 "name": null, 00:21:23.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.032 "is_configured": false, 00:21:23.032 "data_offset": 0, 00:21:23.032 "data_size": 7936 00:21:23.032 }, 00:21:23.032 { 00:21:23.032 "name": "BaseBdev2", 00:21:23.032 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:23.032 "is_configured": true, 00:21:23.032 "data_offset": 256, 00:21:23.032 "data_size": 7936 00:21:23.032 } 00:21:23.032 ] 00:21:23.032 }' 00:21:23.032 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.032 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:23.291 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:23.291 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.291 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:23.291 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:23.291 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.291 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.291 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:21:23.291 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.291 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:23.291 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:23.549 "name": "raid_bdev1", 00:21:23.549 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:23.549 "strip_size_kb": 0, 00:21:23.549 "state": "online", 00:21:23.549 "raid_level": "raid1", 00:21:23.549 "superblock": true, 00:21:23.549 "num_base_bdevs": 2, 00:21:23.549 "num_base_bdevs_discovered": 1, 00:21:23.549 "num_base_bdevs_operational": 1, 00:21:23.549 "base_bdevs_list": [ 00:21:23.549 { 00:21:23.549 "name": null, 00:21:23.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.549 "is_configured": false, 00:21:23.549 "data_offset": 0, 00:21:23.549 "data_size": 7936 00:21:23.549 }, 00:21:23.549 { 00:21:23.549 "name": "BaseBdev2", 00:21:23.549 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:23.549 "is_configured": true, 00:21:23.549 "data_offset": 256, 00:21:23.549 "data_size": 7936 00:21:23.549 } 00:21:23.549 ] 00:21:23.549 }' 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:23.549 [2024-12-06 18:19:48.941326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:23.549 [2024-12-06 18:19:48.941537] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:23.549 [2024-12-06 18:19:48.941573] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:23.549 request: 00:21:23.549 { 00:21:23.549 "base_bdev": "BaseBdev1", 00:21:23.549 "raid_bdev": "raid_bdev1", 00:21:23.549 "method": "bdev_raid_add_base_bdev", 00:21:23.549 "req_id": 1 00:21:23.549 } 00:21:23.549 Got JSON-RPC error response 00:21:23.549 response: 00:21:23.549 { 00:21:23.549 "code": -22, 00:21:23.549 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:23.549 } 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.549 18:19:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:24.481 18:19:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:24.481 18:19:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.481 18:19:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.481 18:19:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:24.481 18:19:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:24.481 18:19:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:24.481 18:19:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.481 18:19:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.481 18:19:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.481 18:19:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.481 18:19:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.481 18:19:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.481 18:19:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:24.481 18:19:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.481 18:19:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.739 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.739 "name": "raid_bdev1", 00:21:24.739 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:24.739 "strip_size_kb": 0, 00:21:24.739 "state": "online", 00:21:24.739 "raid_level": "raid1", 00:21:24.739 "superblock": true, 00:21:24.739 "num_base_bdevs": 2, 00:21:24.739 "num_base_bdevs_discovered": 1, 00:21:24.739 "num_base_bdevs_operational": 1, 00:21:24.739 "base_bdevs_list": [ 00:21:24.739 { 00:21:24.739 "name": null, 00:21:24.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.739 "is_configured": false, 00:21:24.739 "data_offset": 0, 00:21:24.739 "data_size": 7936 00:21:24.739 }, 00:21:24.739 { 00:21:24.739 "name": "BaseBdev2", 00:21:24.739 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:24.739 "is_configured": true, 00:21:24.739 "data_offset": 256, 00:21:24.739 "data_size": 7936 00:21:24.739 } 00:21:24.739 ] 00:21:24.739 }' 00:21:24.739 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.739 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:24.997 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:24.997 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:24.997 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:24.997 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:24.997 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:24.997 18:19:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.997 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.997 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.997 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:25.272 "name": "raid_bdev1", 00:21:25.272 "uuid": "c4c8836c-469e-48e8-b4f8-df2a6313e263", 00:21:25.272 "strip_size_kb": 0, 00:21:25.272 "state": "online", 00:21:25.272 "raid_level": "raid1", 00:21:25.272 "superblock": true, 00:21:25.272 "num_base_bdevs": 2, 00:21:25.272 "num_base_bdevs_discovered": 1, 00:21:25.272 "num_base_bdevs_operational": 1, 00:21:25.272 "base_bdevs_list": [ 00:21:25.272 { 00:21:25.272 "name": null, 00:21:25.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.272 "is_configured": false, 00:21:25.272 "data_offset": 0, 00:21:25.272 "data_size": 7936 00:21:25.272 }, 00:21:25.272 { 00:21:25.272 "name": "BaseBdev2", 00:21:25.272 "uuid": "c3efc178-2941-50e6-a019-70b339a6a21a", 00:21:25.272 "is_configured": true, 00:21:25.272 "data_offset": 256, 00:21:25.272 "data_size": 7936 00:21:25.272 } 00:21:25.272 ] 00:21:25.272 }' 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:25.272 18:19:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86977 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86977 ']' 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86977 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86977 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.272 killing process with pid 86977 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86977' 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86977 00:21:25.272 Received shutdown signal, test time was about 60.000000 seconds 00:21:25.272 00:21:25.272 Latency(us) 00:21:25.272 [2024-12-06T18:19:50.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.272 [2024-12-06T18:19:50.792Z] =================================================================================================================== 00:21:25.272 [2024-12-06T18:19:50.792Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:25.272 [2024-12-06 18:19:50.686496] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:25.272 18:19:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86977 00:21:25.272 [2024-12-06 18:19:50.686660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:25.272 [2024-12-06 
18:19:50.686764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:25.272 [2024-12-06 18:19:50.686815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:25.557 [2024-12-06 18:19:50.946570] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:26.493 18:19:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:21:26.493 00:21:26.493 real 0m21.700s 00:21:26.493 user 0m29.565s 00:21:26.493 sys 0m2.441s 00:21:26.493 18:19:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.493 18:19:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:26.493 ************************************ 00:21:26.493 END TEST raid_rebuild_test_sb_4k 00:21:26.493 ************************************ 00:21:26.753 18:19:52 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:21:26.753 18:19:52 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:21:26.753 18:19:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:26.753 18:19:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.753 18:19:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:26.753 ************************************ 00:21:26.753 START TEST raid_state_function_test_sb_md_separate 00:21:26.753 ************************************ 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:26.753 
18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:26.753 18:19:52 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87676 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:26.753 Process raid pid: 87676 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87676' 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87676 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87676 ']' 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.753 18:19:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:26.753 [2024-12-06 18:19:52.140060] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:21:26.753 [2024-12-06 18:19:52.140201] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.012 [2024-12-06 18:19:52.315832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.012 [2024-12-06 18:19:52.447590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.271 [2024-12-06 18:19:52.652414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.271 [2024-12-06 18:19:52.652486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.838 [2024-12-06 18:19:53.123923] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:27.838 [2024-12-06 18:19:53.123989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:21:27.838 [2024-12-06 18:19:53.124007] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:27.838 [2024-12-06 18:19:53.124022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.838 18:19:53 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.838 "name": "Existed_Raid", 00:21:27.838 "uuid": "41a2aabc-d8b6-42b2-8c1b-d255780cebd5", 00:21:27.838 "strip_size_kb": 0, 00:21:27.838 "state": "configuring", 00:21:27.838 "raid_level": "raid1", 00:21:27.838 "superblock": true, 00:21:27.838 "num_base_bdevs": 2, 00:21:27.838 "num_base_bdevs_discovered": 0, 00:21:27.838 "num_base_bdevs_operational": 2, 00:21:27.838 "base_bdevs_list": [ 00:21:27.838 { 00:21:27.838 "name": "BaseBdev1", 00:21:27.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.838 "is_configured": false, 00:21:27.838 "data_offset": 0, 00:21:27.838 "data_size": 0 00:21:27.838 }, 00:21:27.838 { 00:21:27.838 "name": "BaseBdev2", 00:21:27.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.838 "is_configured": false, 00:21:27.838 "data_offset": 0, 00:21:27.838 "data_size": 0 00:21:27.838 } 00:21:27.838 ] 00:21:27.838 }' 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.838 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.406 [2024-12-06 
18:19:53.663982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:28.406 [2024-12-06 18:19:53.664043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.406 [2024-12-06 18:19:53.671936] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:28.406 [2024-12-06 18:19:53.671996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:28.406 [2024-12-06 18:19:53.672008] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:28.406 [2024-12-06 18:19:53.672025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.406 [2024-12-06 18:19:53.714893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:28.406 BaseBdev1 
00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.406 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.406 [ 00:21:28.406 { 00:21:28.407 "name": "BaseBdev1", 00:21:28.407 "aliases": [ 00:21:28.407 "9f5b7514-8dca-4d6e-adf0-1e04ec348a04" 00:21:28.407 ], 00:21:28.407 "product_name": "Malloc disk", 00:21:28.407 
"block_size": 4096, 00:21:28.407 "num_blocks": 8192, 00:21:28.407 "uuid": "9f5b7514-8dca-4d6e-adf0-1e04ec348a04", 00:21:28.407 "md_size": 32, 00:21:28.407 "md_interleave": false, 00:21:28.407 "dif_type": 0, 00:21:28.407 "assigned_rate_limits": { 00:21:28.407 "rw_ios_per_sec": 0, 00:21:28.407 "rw_mbytes_per_sec": 0, 00:21:28.407 "r_mbytes_per_sec": 0, 00:21:28.407 "w_mbytes_per_sec": 0 00:21:28.407 }, 00:21:28.407 "claimed": true, 00:21:28.407 "claim_type": "exclusive_write", 00:21:28.407 "zoned": false, 00:21:28.407 "supported_io_types": { 00:21:28.407 "read": true, 00:21:28.407 "write": true, 00:21:28.407 "unmap": true, 00:21:28.407 "flush": true, 00:21:28.407 "reset": true, 00:21:28.407 "nvme_admin": false, 00:21:28.407 "nvme_io": false, 00:21:28.407 "nvme_io_md": false, 00:21:28.407 "write_zeroes": true, 00:21:28.407 "zcopy": true, 00:21:28.407 "get_zone_info": false, 00:21:28.407 "zone_management": false, 00:21:28.407 "zone_append": false, 00:21:28.407 "compare": false, 00:21:28.407 "compare_and_write": false, 00:21:28.407 "abort": true, 00:21:28.407 "seek_hole": false, 00:21:28.407 "seek_data": false, 00:21:28.407 "copy": true, 00:21:28.407 "nvme_iov_md": false 00:21:28.407 }, 00:21:28.407 "memory_domains": [ 00:21:28.407 { 00:21:28.407 "dma_device_id": "system", 00:21:28.407 "dma_device_type": 1 00:21:28.407 }, 00:21:28.407 { 00:21:28.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.407 "dma_device_type": 2 00:21:28.407 } 00:21:28.407 ], 00:21:28.407 "driver_specific": {} 00:21:28.407 } 00:21:28.407 ] 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:28.407 18:19:53 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.407 "name": "Existed_Raid", 00:21:28.407 "uuid": "748c2dbb-c3d6-4bd8-8942-fecdb48d582e", 
00:21:28.407 "strip_size_kb": 0, 00:21:28.407 "state": "configuring", 00:21:28.407 "raid_level": "raid1", 00:21:28.407 "superblock": true, 00:21:28.407 "num_base_bdevs": 2, 00:21:28.407 "num_base_bdevs_discovered": 1, 00:21:28.407 "num_base_bdevs_operational": 2, 00:21:28.407 "base_bdevs_list": [ 00:21:28.407 { 00:21:28.407 "name": "BaseBdev1", 00:21:28.407 "uuid": "9f5b7514-8dca-4d6e-adf0-1e04ec348a04", 00:21:28.407 "is_configured": true, 00:21:28.407 "data_offset": 256, 00:21:28.407 "data_size": 7936 00:21:28.407 }, 00:21:28.407 { 00:21:28.407 "name": "BaseBdev2", 00:21:28.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.407 "is_configured": false, 00:21:28.407 "data_offset": 0, 00:21:28.407 "data_size": 0 00:21:28.407 } 00:21:28.407 ] 00:21:28.407 }' 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.407 18:19:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.975 [2024-12-06 18:19:54.259185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:28.975 [2024-12-06 18:19:54.259260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:28.975 18:19:54 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.975 [2024-12-06 18:19:54.267172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:28.975 [2024-12-06 18:19:54.269581] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:28.975 [2024-12-06 18:19:54.269641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.975 "name": "Existed_Raid", 00:21:28.975 "uuid": "4d93c8e4-f51d-4ecf-9a14-8212d158c24d", 00:21:28.975 "strip_size_kb": 0, 00:21:28.975 "state": "configuring", 00:21:28.975 "raid_level": "raid1", 00:21:28.975 "superblock": true, 00:21:28.975 "num_base_bdevs": 2, 00:21:28.975 "num_base_bdevs_discovered": 1, 00:21:28.975 "num_base_bdevs_operational": 2, 00:21:28.975 "base_bdevs_list": [ 00:21:28.975 { 00:21:28.975 "name": "BaseBdev1", 00:21:28.975 "uuid": "9f5b7514-8dca-4d6e-adf0-1e04ec348a04", 00:21:28.975 "is_configured": true, 00:21:28.975 "data_offset": 256, 00:21:28.975 "data_size": 7936 00:21:28.975 }, 00:21:28.975 { 00:21:28.975 "name": "BaseBdev2", 00:21:28.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.975 "is_configured": false, 00:21:28.975 "data_offset": 0, 00:21:28.975 "data_size": 0 00:21:28.975 } 00:21:28.975 ] 00:21:28.975 }' 00:21:28.975 18:19:54 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.975 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.543 [2024-12-06 18:19:54.811166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:29.543 [2024-12-06 18:19:54.811490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:29.543 [2024-12-06 18:19:54.811544] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:29.543 [2024-12-06 18:19:54.811637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:29.543 [2024-12-06 18:19:54.811819] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:29.543 [2024-12-06 18:19:54.811850] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:29.543 BaseBdev2 00:21:29.543 [2024-12-06 18:19:54.811966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.543 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.543 [ 00:21:29.543 { 00:21:29.543 "name": "BaseBdev2", 00:21:29.543 "aliases": [ 00:21:29.543 "4b2cb312-1ecd-46a5-9d14-133c76753ce3" 00:21:29.543 ], 00:21:29.543 "product_name": "Malloc disk", 00:21:29.543 "block_size": 4096, 00:21:29.543 "num_blocks": 8192, 00:21:29.543 "uuid": "4b2cb312-1ecd-46a5-9d14-133c76753ce3", 00:21:29.543 "md_size": 32, 00:21:29.543 "md_interleave": false, 00:21:29.543 "dif_type": 0, 00:21:29.543 "assigned_rate_limits": { 00:21:29.543 "rw_ios_per_sec": 0, 00:21:29.543 "rw_mbytes_per_sec": 0, 00:21:29.543 "r_mbytes_per_sec": 0, 00:21:29.544 "w_mbytes_per_sec": 0 00:21:29.544 }, 00:21:29.544 "claimed": true, 00:21:29.544 "claim_type": 
"exclusive_write", 00:21:29.544 "zoned": false, 00:21:29.544 "supported_io_types": { 00:21:29.544 "read": true, 00:21:29.544 "write": true, 00:21:29.544 "unmap": true, 00:21:29.544 "flush": true, 00:21:29.544 "reset": true, 00:21:29.544 "nvme_admin": false, 00:21:29.544 "nvme_io": false, 00:21:29.544 "nvme_io_md": false, 00:21:29.544 "write_zeroes": true, 00:21:29.544 "zcopy": true, 00:21:29.544 "get_zone_info": false, 00:21:29.544 "zone_management": false, 00:21:29.544 "zone_append": false, 00:21:29.544 "compare": false, 00:21:29.544 "compare_and_write": false, 00:21:29.544 "abort": true, 00:21:29.544 "seek_hole": false, 00:21:29.544 "seek_data": false, 00:21:29.544 "copy": true, 00:21:29.544 "nvme_iov_md": false 00:21:29.544 }, 00:21:29.544 "memory_domains": [ 00:21:29.544 { 00:21:29.544 "dma_device_id": "system", 00:21:29.544 "dma_device_type": 1 00:21:29.544 }, 00:21:29.544 { 00:21:29.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.544 "dma_device_type": 2 00:21:29.544 } 00:21:29.544 ], 00:21:29.544 "driver_specific": {} 00:21:29.544 } 00:21:29.544 ] 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:29.544 
18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.544 "name": "Existed_Raid", 00:21:29.544 "uuid": "4d93c8e4-f51d-4ecf-9a14-8212d158c24d", 00:21:29.544 "strip_size_kb": 0, 00:21:29.544 "state": "online", 00:21:29.544 "raid_level": "raid1", 00:21:29.544 "superblock": true, 00:21:29.544 "num_base_bdevs": 2, 00:21:29.544 "num_base_bdevs_discovered": 2, 00:21:29.544 "num_base_bdevs_operational": 2, 00:21:29.544 
"base_bdevs_list": [ 00:21:29.544 { 00:21:29.544 "name": "BaseBdev1", 00:21:29.544 "uuid": "9f5b7514-8dca-4d6e-adf0-1e04ec348a04", 00:21:29.544 "is_configured": true, 00:21:29.544 "data_offset": 256, 00:21:29.544 "data_size": 7936 00:21:29.544 }, 00:21:29.544 { 00:21:29.544 "name": "BaseBdev2", 00:21:29.544 "uuid": "4b2cb312-1ecd-46a5-9d14-133c76753ce3", 00:21:29.544 "is_configured": true, 00:21:29.544 "data_offset": 256, 00:21:29.544 "data_size": 7936 00:21:29.544 } 00:21:29.544 ] 00:21:29.544 }' 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.544 18:19:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.113 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:30.113 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:30.113 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:30.113 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:30.113 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:30.113 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:30.113 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:30.113 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:30.113 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.113 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:21:30.113 [2024-12-06 18:19:55.375828] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:30.113 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.113 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:30.113 "name": "Existed_Raid", 00:21:30.113 "aliases": [ 00:21:30.113 "4d93c8e4-f51d-4ecf-9a14-8212d158c24d" 00:21:30.113 ], 00:21:30.113 "product_name": "Raid Volume", 00:21:30.113 "block_size": 4096, 00:21:30.113 "num_blocks": 7936, 00:21:30.113 "uuid": "4d93c8e4-f51d-4ecf-9a14-8212d158c24d", 00:21:30.113 "md_size": 32, 00:21:30.113 "md_interleave": false, 00:21:30.113 "dif_type": 0, 00:21:30.113 "assigned_rate_limits": { 00:21:30.113 "rw_ios_per_sec": 0, 00:21:30.113 "rw_mbytes_per_sec": 0, 00:21:30.113 "r_mbytes_per_sec": 0, 00:21:30.113 "w_mbytes_per_sec": 0 00:21:30.113 }, 00:21:30.113 "claimed": false, 00:21:30.113 "zoned": false, 00:21:30.113 "supported_io_types": { 00:21:30.113 "read": true, 00:21:30.113 "write": true, 00:21:30.113 "unmap": false, 00:21:30.113 "flush": false, 00:21:30.113 "reset": true, 00:21:30.113 "nvme_admin": false, 00:21:30.113 "nvme_io": false, 00:21:30.113 "nvme_io_md": false, 00:21:30.113 "write_zeroes": true, 00:21:30.113 "zcopy": false, 00:21:30.113 "get_zone_info": false, 00:21:30.113 "zone_management": false, 00:21:30.113 "zone_append": false, 00:21:30.113 "compare": false, 00:21:30.113 "compare_and_write": false, 00:21:30.113 "abort": false, 00:21:30.113 "seek_hole": false, 00:21:30.113 "seek_data": false, 00:21:30.113 "copy": false, 00:21:30.113 "nvme_iov_md": false 00:21:30.113 }, 00:21:30.113 "memory_domains": [ 00:21:30.113 { 00:21:30.113 "dma_device_id": "system", 00:21:30.113 "dma_device_type": 1 00:21:30.113 }, 00:21:30.113 { 00:21:30.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.113 "dma_device_type": 2 00:21:30.113 }, 00:21:30.113 { 
00:21:30.113 "dma_device_id": "system", 00:21:30.113 "dma_device_type": 1 00:21:30.113 }, 00:21:30.113 { 00:21:30.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.113 "dma_device_type": 2 00:21:30.113 } 00:21:30.113 ], 00:21:30.113 "driver_specific": { 00:21:30.113 "raid": { 00:21:30.113 "uuid": "4d93c8e4-f51d-4ecf-9a14-8212d158c24d", 00:21:30.113 "strip_size_kb": 0, 00:21:30.113 "state": "online", 00:21:30.113 "raid_level": "raid1", 00:21:30.113 "superblock": true, 00:21:30.113 "num_base_bdevs": 2, 00:21:30.113 "num_base_bdevs_discovered": 2, 00:21:30.113 "num_base_bdevs_operational": 2, 00:21:30.113 "base_bdevs_list": [ 00:21:30.113 { 00:21:30.113 "name": "BaseBdev1", 00:21:30.113 "uuid": "9f5b7514-8dca-4d6e-adf0-1e04ec348a04", 00:21:30.113 "is_configured": true, 00:21:30.113 "data_offset": 256, 00:21:30.113 "data_size": 7936 00:21:30.113 }, 00:21:30.113 { 00:21:30.113 "name": "BaseBdev2", 00:21:30.113 "uuid": "4b2cb312-1ecd-46a5-9d14-133c76753ce3", 00:21:30.113 "is_configured": true, 00:21:30.113 "data_offset": 256, 00:21:30.114 "data_size": 7936 00:21:30.114 } 00:21:30.114 ] 00:21:30.114 } 00:21:30.114 } 00:21:30.114 }' 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:30.114 BaseBdev2' 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.114 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.114 [2024-12-06 18:19:55.619592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.385 "name": "Existed_Raid", 00:21:30.385 "uuid": "4d93c8e4-f51d-4ecf-9a14-8212d158c24d", 00:21:30.385 "strip_size_kb": 0, 00:21:30.385 "state": "online", 00:21:30.385 "raid_level": "raid1", 00:21:30.385 "superblock": true, 00:21:30.385 "num_base_bdevs": 2, 00:21:30.385 "num_base_bdevs_discovered": 1, 00:21:30.385 "num_base_bdevs_operational": 1, 00:21:30.385 "base_bdevs_list": [ 00:21:30.385 { 00:21:30.385 "name": null, 00:21:30.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.385 "is_configured": false, 00:21:30.385 "data_offset": 0, 00:21:30.385 "data_size": 7936 00:21:30.385 }, 00:21:30.385 { 00:21:30.385 "name": "BaseBdev2", 00:21:30.385 "uuid": 
"4b2cb312-1ecd-46a5-9d14-133c76753ce3", 00:21:30.385 "is_configured": true, 00:21:30.385 "data_offset": 256, 00:21:30.385 "data_size": 7936 00:21:30.385 } 00:21:30.385 ] 00:21:30.385 }' 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.385 18:19:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.951 [2024-12-06 18:19:56.309228] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:30.951 [2024-12-06 18:19:56.309361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:30.951 [2024-12-06 18:19:56.404307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:30.951 [2024-12-06 18:19:56.404389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:30.951 [2024-12-06 18:19:56.404409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.951 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.952 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:30.952 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:30.952 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:30.952 18:19:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87676 00:21:30.952 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87676 ']' 00:21:30.952 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87676 00:21:30.952 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:21:30.952 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.952 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87676 00:21:31.211 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:31.211 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:31.211 killing process with pid 87676 00:21:31.211 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87676' 00:21:31.211 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87676 00:21:31.211 [2024-12-06 18:19:56.492264] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:31.211 18:19:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87676 00:21:31.211 [2024-12-06 18:19:56.507692] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:32.147 18:19:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:21:32.147 00:21:32.147 real 0m5.543s 00:21:32.147 user 0m8.378s 00:21:32.147 sys 0m0.767s 00:21:32.147 18:19:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:32.147 
18:19:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.147 ************************************ 00:21:32.147 END TEST raid_state_function_test_sb_md_separate 00:21:32.147 ************************************ 00:21:32.147 18:19:57 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:21:32.147 18:19:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:32.147 18:19:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:32.147 18:19:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:32.147 ************************************ 00:21:32.147 START TEST raid_superblock_test_md_separate 00:21:32.147 ************************************ 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87934 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87934 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87934 ']' 00:21:32.147 18:19:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.148 18:19:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.148 18:19:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:32.148 18:19:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.148 18:19:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.406 [2024-12-06 18:19:57.756038] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:21:32.406 [2024-12-06 18:19:57.756209] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87934 ] 00:21:32.665 [2024-12-06 18:19:57.947569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.665 [2024-12-06 18:19:58.107985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.923 [2024-12-06 18:19:58.341257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.923 [2024-12-06 18:19:58.341299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:33.492 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.492 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:21:33.492 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:33.492 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:33.492 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:33.492 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:33.492 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:33.492 18:19:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.493 malloc1 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.493 [2024-12-06 18:19:58.792818] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:33.493 [2024-12-06 18:19:58.792882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.493 [2024-12-06 18:19:58.792915] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:33.493 [2024-12-06 18:19:58.792931] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.493 [2024-12-06 18:19:58.795550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.493 [2024-12-06 18:19:58.795596] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:21:33.493 pt1 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.493 malloc2 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.493 18:19:58 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.493 [2024-12-06 18:19:58.851636] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:33.493 [2024-12-06 18:19:58.851727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.493 [2024-12-06 18:19:58.851757] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:33.493 [2024-12-06 18:19:58.851770] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.493 [2024-12-06 18:19:58.854327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.493 [2024-12-06 18:19:58.854368] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:33.493 pt2 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.493 [2024-12-06 18:19:58.863630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:33.493 [2024-12-06 18:19:58.866274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:33.493 [2024-12-06 18:19:58.866515] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:33.493 [2024-12-06 18:19:58.866539] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:33.493 [2024-12-06 18:19:58.866641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:33.493 [2024-12-06 18:19:58.866830] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:33.493 [2024-12-06 18:19:58.866861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:33.493 [2024-12-06 18:19:58.866990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.493 18:19:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.493 "name": "raid_bdev1", 00:21:33.493 "uuid": "5e6bd1f7-59d9-4a12-8413-cb4d4ef88407", 00:21:33.493 "strip_size_kb": 0, 00:21:33.493 "state": "online", 00:21:33.493 "raid_level": "raid1", 00:21:33.493 "superblock": true, 00:21:33.493 "num_base_bdevs": 2, 00:21:33.493 "num_base_bdevs_discovered": 2, 00:21:33.493 "num_base_bdevs_operational": 2, 00:21:33.493 "base_bdevs_list": [ 00:21:33.493 { 00:21:33.493 "name": "pt1", 00:21:33.493 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:33.493 "is_configured": true, 00:21:33.493 "data_offset": 256, 00:21:33.493 "data_size": 7936 00:21:33.493 }, 00:21:33.493 { 00:21:33.493 "name": "pt2", 00:21:33.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:33.493 "is_configured": true, 00:21:33.493 "data_offset": 256, 00:21:33.493 "data_size": 7936 00:21:33.493 } 00:21:33.493 ] 00:21:33.493 }' 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.493 18:19:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.061 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:34.061 18:19:59 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:34.061 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:34.061 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:34.061 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:34.061 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.062 [2024-12-06 18:19:59.416175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:34.062 "name": "raid_bdev1", 00:21:34.062 "aliases": [ 00:21:34.062 "5e6bd1f7-59d9-4a12-8413-cb4d4ef88407" 00:21:34.062 ], 00:21:34.062 "product_name": "Raid Volume", 00:21:34.062 "block_size": 4096, 00:21:34.062 "num_blocks": 7936, 00:21:34.062 "uuid": "5e6bd1f7-59d9-4a12-8413-cb4d4ef88407", 00:21:34.062 "md_size": 32, 00:21:34.062 "md_interleave": false, 00:21:34.062 "dif_type": 0, 00:21:34.062 "assigned_rate_limits": { 00:21:34.062 "rw_ios_per_sec": 0, 00:21:34.062 "rw_mbytes_per_sec": 0, 00:21:34.062 "r_mbytes_per_sec": 0, 00:21:34.062 "w_mbytes_per_sec": 0 00:21:34.062 }, 00:21:34.062 "claimed": false, 00:21:34.062 "zoned": false, 
00:21:34.062 "supported_io_types": { 00:21:34.062 "read": true, 00:21:34.062 "write": true, 00:21:34.062 "unmap": false, 00:21:34.062 "flush": false, 00:21:34.062 "reset": true, 00:21:34.062 "nvme_admin": false, 00:21:34.062 "nvme_io": false, 00:21:34.062 "nvme_io_md": false, 00:21:34.062 "write_zeroes": true, 00:21:34.062 "zcopy": false, 00:21:34.062 "get_zone_info": false, 00:21:34.062 "zone_management": false, 00:21:34.062 "zone_append": false, 00:21:34.062 "compare": false, 00:21:34.062 "compare_and_write": false, 00:21:34.062 "abort": false, 00:21:34.062 "seek_hole": false, 00:21:34.062 "seek_data": false, 00:21:34.062 "copy": false, 00:21:34.062 "nvme_iov_md": false 00:21:34.062 }, 00:21:34.062 "memory_domains": [ 00:21:34.062 { 00:21:34.062 "dma_device_id": "system", 00:21:34.062 "dma_device_type": 1 00:21:34.062 }, 00:21:34.062 { 00:21:34.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.062 "dma_device_type": 2 00:21:34.062 }, 00:21:34.062 { 00:21:34.062 "dma_device_id": "system", 00:21:34.062 "dma_device_type": 1 00:21:34.062 }, 00:21:34.062 { 00:21:34.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.062 "dma_device_type": 2 00:21:34.062 } 00:21:34.062 ], 00:21:34.062 "driver_specific": { 00:21:34.062 "raid": { 00:21:34.062 "uuid": "5e6bd1f7-59d9-4a12-8413-cb4d4ef88407", 00:21:34.062 "strip_size_kb": 0, 00:21:34.062 "state": "online", 00:21:34.062 "raid_level": "raid1", 00:21:34.062 "superblock": true, 00:21:34.062 "num_base_bdevs": 2, 00:21:34.062 "num_base_bdevs_discovered": 2, 00:21:34.062 "num_base_bdevs_operational": 2, 00:21:34.062 "base_bdevs_list": [ 00:21:34.062 { 00:21:34.062 "name": "pt1", 00:21:34.062 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:34.062 "is_configured": true, 00:21:34.062 "data_offset": 256, 00:21:34.062 "data_size": 7936 00:21:34.062 }, 00:21:34.062 { 00:21:34.062 "name": "pt2", 00:21:34.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:34.062 "is_configured": true, 00:21:34.062 "data_offset": 256, 
00:21:34.062 "data_size": 7936 00:21:34.062 } 00:21:34.062 ] 00:21:34.062 } 00:21:34.062 } 00:21:34.062 }' 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:34.062 pt2' 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.062 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:34.320 [2024-12-06 18:19:59.684176] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5e6bd1f7-59d9-4a12-8413-cb4d4ef88407 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 5e6bd1f7-59d9-4a12-8413-cb4d4ef88407 ']' 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.320 [2024-12-06 18:19:59.735809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:34.320 [2024-12-06 18:19:59.735856] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:34.320 [2024-12-06 18:19:59.735950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:34.320 [2024-12-06 18:19:59.736025] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.320 [2024-12-06 18:19:59.736043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:34.320 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.577 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.577 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:34.577 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:34.577 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:21:34.577 18:19:59 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:34.577 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:34.577 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.577 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:34.577 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.577 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:34.577 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.577 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.578 [2024-12-06 18:19:59.875917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:34.578 [2024-12-06 18:19:59.878485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:34.578 [2024-12-06 18:19:59.878591] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:34.578 [2024-12-06 18:19:59.878662] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:34.578 [2024-12-06 18:19:59.878687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:34.578 [2024-12-06 18:19:59.878702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:34.578 request: 00:21:34.578 { 00:21:34.578 "name": 
"raid_bdev1", 00:21:34.578 "raid_level": "raid1", 00:21:34.578 "base_bdevs": [ 00:21:34.578 "malloc1", 00:21:34.578 "malloc2" 00:21:34.578 ], 00:21:34.578 "superblock": false, 00:21:34.578 "method": "bdev_raid_create", 00:21:34.578 "req_id": 1 00:21:34.578 } 00:21:34.578 Got JSON-RPC error response 00:21:34.578 response: 00:21:34.578 { 00:21:34.578 "code": -17, 00:21:34.578 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:34.578 } 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.578 [2024-12-06 18:19:59.943875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:34.578 [2024-12-06 18:19:59.944085] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.578 [2024-12-06 18:19:59.944154] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:34.578 [2024-12-06 18:19:59.944400] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.578 [2024-12-06 18:19:59.947046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.578 [2024-12-06 18:19:59.947285] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:34.578 [2024-12-06 18:19:59.947451] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:34.578 [2024-12-06 18:19:59.947682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:34.578 pt1 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.578 18:19:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.578 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.578 "name": "raid_bdev1", 00:21:34.578 "uuid": "5e6bd1f7-59d9-4a12-8413-cb4d4ef88407", 00:21:34.578 "strip_size_kb": 0, 00:21:34.578 "state": "configuring", 00:21:34.578 "raid_level": "raid1", 00:21:34.578 "superblock": true, 00:21:34.578 "num_base_bdevs": 2, 00:21:34.578 "num_base_bdevs_discovered": 1, 00:21:34.578 "num_base_bdevs_operational": 2, 00:21:34.578 "base_bdevs_list": [ 00:21:34.578 { 00:21:34.578 "name": "pt1", 00:21:34.578 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:34.578 "is_configured": true, 00:21:34.578 "data_offset": 256, 00:21:34.578 "data_size": 7936 00:21:34.578 }, 00:21:34.578 { 00:21:34.578 "name": null, 00:21:34.578 
"uuid": "00000000-0000-0000-0000-000000000002", 00:21:34.578 "is_configured": false, 00:21:34.578 "data_offset": 256, 00:21:34.578 "data_size": 7936 00:21:34.578 } 00:21:34.578 ] 00:21:34.578 }' 00:21:34.578 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.578 18:20:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.201 [2024-12-06 18:20:00.472232] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:35.201 [2024-12-06 18:20:00.472360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.201 [2024-12-06 18:20:00.472401] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:35.201 [2024-12-06 18:20:00.472423] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.201 [2024-12-06 18:20:00.472710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.201 [2024-12-06 18:20:00.472740] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:35.201 [2024-12-06 18:20:00.472824] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:21:35.201 [2024-12-06 18:20:00.472861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:35.201 [2024-12-06 18:20:00.472998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:35.201 [2024-12-06 18:20:00.473019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:35.201 [2024-12-06 18:20:00.473110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:35.201 [2024-12-06 18:20:00.473253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:35.201 [2024-12-06 18:20:00.473267] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:35.201 [2024-12-06 18:20:00.473384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.201 pt2 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.201 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.202 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.202 18:20:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.202 18:20:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.202 18:20:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.202 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.202 "name": "raid_bdev1", 00:21:35.202 "uuid": "5e6bd1f7-59d9-4a12-8413-cb4d4ef88407", 00:21:35.202 "strip_size_kb": 0, 00:21:35.202 "state": "online", 00:21:35.202 "raid_level": "raid1", 00:21:35.202 "superblock": true, 00:21:35.202 "num_base_bdevs": 2, 00:21:35.202 "num_base_bdevs_discovered": 2, 00:21:35.202 "num_base_bdevs_operational": 2, 00:21:35.202 "base_bdevs_list": [ 00:21:35.202 { 00:21:35.202 "name": "pt1", 00:21:35.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:35.202 "is_configured": true, 00:21:35.202 "data_offset": 256, 00:21:35.202 "data_size": 7936 00:21:35.202 }, 00:21:35.202 { 00:21:35.202 "name": "pt2", 00:21:35.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:35.202 "is_configured": true, 00:21:35.202 "data_offset": 256, 
00:21:35.202 "data_size": 7936 00:21:35.202 } 00:21:35.202 ] 00:21:35.202 }' 00:21:35.202 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.202 18:20:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.769 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:35.769 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:35.769 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:35.769 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:35.769 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:35.769 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:35.769 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:35.769 18:20:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.769 18:20:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.769 18:20:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:35.769 [2024-12-06 18:20:01.000678] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:35.769 "name": "raid_bdev1", 00:21:35.769 "aliases": [ 00:21:35.769 "5e6bd1f7-59d9-4a12-8413-cb4d4ef88407" 00:21:35.769 ], 00:21:35.769 "product_name": 
"Raid Volume", 00:21:35.769 "block_size": 4096, 00:21:35.769 "num_blocks": 7936, 00:21:35.769 "uuid": "5e6bd1f7-59d9-4a12-8413-cb4d4ef88407", 00:21:35.769 "md_size": 32, 00:21:35.769 "md_interleave": false, 00:21:35.769 "dif_type": 0, 00:21:35.769 "assigned_rate_limits": { 00:21:35.769 "rw_ios_per_sec": 0, 00:21:35.769 "rw_mbytes_per_sec": 0, 00:21:35.769 "r_mbytes_per_sec": 0, 00:21:35.769 "w_mbytes_per_sec": 0 00:21:35.769 }, 00:21:35.769 "claimed": false, 00:21:35.769 "zoned": false, 00:21:35.769 "supported_io_types": { 00:21:35.769 "read": true, 00:21:35.769 "write": true, 00:21:35.769 "unmap": false, 00:21:35.769 "flush": false, 00:21:35.769 "reset": true, 00:21:35.769 "nvme_admin": false, 00:21:35.769 "nvme_io": false, 00:21:35.769 "nvme_io_md": false, 00:21:35.769 "write_zeroes": true, 00:21:35.769 "zcopy": false, 00:21:35.769 "get_zone_info": false, 00:21:35.769 "zone_management": false, 00:21:35.769 "zone_append": false, 00:21:35.769 "compare": false, 00:21:35.769 "compare_and_write": false, 00:21:35.769 "abort": false, 00:21:35.769 "seek_hole": false, 00:21:35.769 "seek_data": false, 00:21:35.769 "copy": false, 00:21:35.769 "nvme_iov_md": false 00:21:35.769 }, 00:21:35.769 "memory_domains": [ 00:21:35.769 { 00:21:35.769 "dma_device_id": "system", 00:21:35.769 "dma_device_type": 1 00:21:35.769 }, 00:21:35.769 { 00:21:35.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.769 "dma_device_type": 2 00:21:35.769 }, 00:21:35.769 { 00:21:35.769 "dma_device_id": "system", 00:21:35.769 "dma_device_type": 1 00:21:35.769 }, 00:21:35.769 { 00:21:35.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.769 "dma_device_type": 2 00:21:35.769 } 00:21:35.769 ], 00:21:35.769 "driver_specific": { 00:21:35.769 "raid": { 00:21:35.769 "uuid": "5e6bd1f7-59d9-4a12-8413-cb4d4ef88407", 00:21:35.769 "strip_size_kb": 0, 00:21:35.769 "state": "online", 00:21:35.769 "raid_level": "raid1", 00:21:35.769 "superblock": true, 00:21:35.769 "num_base_bdevs": 2, 00:21:35.769 
"num_base_bdevs_discovered": 2, 00:21:35.769 "num_base_bdevs_operational": 2, 00:21:35.769 "base_bdevs_list": [ 00:21:35.769 { 00:21:35.769 "name": "pt1", 00:21:35.769 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:35.769 "is_configured": true, 00:21:35.769 "data_offset": 256, 00:21:35.769 "data_size": 7936 00:21:35.769 }, 00:21:35.769 { 00:21:35.769 "name": "pt2", 00:21:35.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:35.769 "is_configured": true, 00:21:35.769 "data_offset": 256, 00:21:35.769 "data_size": 7936 00:21:35.769 } 00:21:35.769 ] 00:21:35.769 } 00:21:35.769 } 00:21:35.769 }' 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:35.769 pt2' 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.769 
18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.769 [2024-12-06 18:20:01.264789] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.769 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 5e6bd1f7-59d9-4a12-8413-cb4d4ef88407 '!=' 5e6bd1f7-59d9-4a12-8413-cb4d4ef88407 ']' 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.027 [2024-12-06 18:20:01.312533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.027 18:20:01 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.027 "name": "raid_bdev1", 00:21:36.027 "uuid": "5e6bd1f7-59d9-4a12-8413-cb4d4ef88407", 00:21:36.027 "strip_size_kb": 0, 00:21:36.027 "state": "online", 00:21:36.027 "raid_level": "raid1", 00:21:36.027 "superblock": true, 00:21:36.027 "num_base_bdevs": 2, 00:21:36.027 "num_base_bdevs_discovered": 1, 00:21:36.027 "num_base_bdevs_operational": 1, 00:21:36.027 "base_bdevs_list": [ 00:21:36.027 { 00:21:36.027 "name": null, 00:21:36.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.027 "is_configured": false, 00:21:36.027 "data_offset": 0, 00:21:36.027 "data_size": 7936 00:21:36.027 }, 00:21:36.027 { 00:21:36.027 "name": "pt2", 00:21:36.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:36.027 "is_configured": true, 00:21:36.027 "data_offset": 256, 00:21:36.027 "data_size": 7936 00:21:36.027 } 00:21:36.027 ] 00:21:36.027 }' 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:21:36.027 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.593 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:36.593 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.593 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.593 [2024-12-06 18:20:01.848707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:36.593 [2024-12-06 18:20:01.848740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:36.593 [2024-12-06 18:20:01.848849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:36.593 [2024-12-06 18:20:01.848940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:36.593 [2024-12-06 18:20:01.848961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:36.593 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:36.594 18:20:01 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.594 [2024-12-06 18:20:01.916700] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:36.594 [2024-12-06 18:20:01.916809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.594 
[2024-12-06 18:20:01.916836] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:36.594 [2024-12-06 18:20:01.916852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.594 [2024-12-06 18:20:01.919596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.594 [2024-12-06 18:20:01.919806] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:36.594 [2024-12-06 18:20:01.919886] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:36.594 [2024-12-06 18:20:01.919951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:36.594 [2024-12-06 18:20:01.920071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:36.594 [2024-12-06 18:20:01.920093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:36.594 [2024-12-06 18:20:01.920181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:36.594 [2024-12-06 18:20:01.920332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:36.594 [2024-12-06 18:20:01.920361] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:36.594 [2024-12-06 18:20:01.920488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.594 pt2 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.594 "name": "raid_bdev1", 00:21:36.594 "uuid": "5e6bd1f7-59d9-4a12-8413-cb4d4ef88407", 00:21:36.594 "strip_size_kb": 0, 00:21:36.594 "state": "online", 00:21:36.594 "raid_level": "raid1", 00:21:36.594 "superblock": true, 00:21:36.594 "num_base_bdevs": 2, 00:21:36.594 "num_base_bdevs_discovered": 1, 00:21:36.594 "num_base_bdevs_operational": 1, 00:21:36.594 "base_bdevs_list": [ 00:21:36.594 { 00:21:36.594 
"name": null, 00:21:36.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.594 "is_configured": false, 00:21:36.594 "data_offset": 256, 00:21:36.594 "data_size": 7936 00:21:36.594 }, 00:21:36.594 { 00:21:36.594 "name": "pt2", 00:21:36.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:36.594 "is_configured": true, 00:21:36.594 "data_offset": 256, 00:21:36.594 "data_size": 7936 00:21:36.594 } 00:21:36.594 ] 00:21:36.594 }' 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.594 18:20:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.161 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:37.161 18:20:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.161 18:20:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.161 [2024-12-06 18:20:02.448820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:37.161 [2024-12-06 18:20:02.448866] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:37.161 [2024-12-06 18:20:02.448954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:37.161 [2024-12-06 18:20:02.449036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:37.161 [2024-12-06 18:20:02.449063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:37.161 18:20:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.161 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.161 18:20:02 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.161 18:20:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.161 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:37.161 18:20:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.161 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:37.161 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.162 [2024-12-06 18:20:02.512895] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:37.162 [2024-12-06 18:20:02.512971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.162 [2024-12-06 18:20:02.513003] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:37.162 [2024-12-06 18:20:02.513018] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.162 [2024-12-06 18:20:02.515685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.162 [2024-12-06 18:20:02.515874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:37.162 [2024-12-06 18:20:02.515967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:21:37.162 [2024-12-06 18:20:02.516032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:37.162 [2024-12-06 18:20:02.516197] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:37.162 [2024-12-06 18:20:02.516215] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:37.162 [2024-12-06 18:20:02.516241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:37.162 [2024-12-06 18:20:02.516323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:37.162 [2024-12-06 18:20:02.516424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:37.162 [2024-12-06 18:20:02.516439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:37.162 [2024-12-06 18:20:02.516516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:37.162 [2024-12-06 18:20:02.516695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:37.162 [2024-12-06 18:20:02.516713] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:37.162 [2024-12-06 18:20:02.516922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.162 pt1 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.162 "name": "raid_bdev1", 00:21:37.162 "uuid": "5e6bd1f7-59d9-4a12-8413-cb4d4ef88407", 00:21:37.162 "strip_size_kb": 0, 00:21:37.162 "state": "online", 00:21:37.162 "raid_level": "raid1", 00:21:37.162 "superblock": true, 00:21:37.162 "num_base_bdevs": 2, 00:21:37.162 "num_base_bdevs_discovered": 1, 00:21:37.162 
"num_base_bdevs_operational": 1, 00:21:37.162 "base_bdevs_list": [ 00:21:37.162 { 00:21:37.162 "name": null, 00:21:37.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.162 "is_configured": false, 00:21:37.162 "data_offset": 256, 00:21:37.162 "data_size": 7936 00:21:37.162 }, 00:21:37.162 { 00:21:37.162 "name": "pt2", 00:21:37.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:37.162 "is_configured": true, 00:21:37.162 "data_offset": 256, 00:21:37.162 "data_size": 7936 00:21:37.162 } 00:21:37.162 ] 00:21:37.162 }' 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.162 18:20:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:37.729 [2024-12-06 
18:20:03.117534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 5e6bd1f7-59d9-4a12-8413-cb4d4ef88407 '!=' 5e6bd1f7-59d9-4a12-8413-cb4d4ef88407 ']' 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87934 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87934 ']' 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87934 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87934 00:21:37.729 killing process with pid 87934 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87934' 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87934 00:21:37.729 [2024-12-06 18:20:03.195123] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:37.729 18:20:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87934 00:21:37.729 [2024-12-06 18:20:03.195295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:21:37.729 [2024-12-06 18:20:03.195394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:37.729 [2024-12-06 18:20:03.195431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:37.987 [2024-12-06 18:20:03.404882] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:38.923 ************************************ 00:21:38.923 END TEST raid_superblock_test_md_separate 00:21:38.923 ************************************ 00:21:38.923 18:20:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:21:38.923 00:21:38.923 real 0m6.787s 00:21:38.923 user 0m10.717s 00:21:38.923 sys 0m1.049s 00:21:38.923 18:20:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.923 18:20:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:39.182 18:20:04 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:21:39.182 18:20:04 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:21:39.182 18:20:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:39.182 18:20:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:39.182 18:20:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:39.182 ************************************ 00:21:39.182 START TEST raid_rebuild_test_sb_md_separate 00:21:39.182 ************************************ 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:39.182 
18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88268 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88268 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88268 ']' 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.182 18:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:39.182 [2024-12-06 18:20:04.600650] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:21:39.182 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:39.182 Zero copy mechanism will not be used. 00:21:39.182 [2024-12-06 18:20:04.600861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88268 ] 00:21:39.441 [2024-12-06 18:20:04.790989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.441 [2024-12-06 18:20:04.952954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.700 [2024-12-06 18:20:05.178825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:39.700 [2024-12-06 18:20:05.178913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.268 BaseBdev1_malloc 
00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.268 [2024-12-06 18:20:05.708422] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:40.268 [2024-12-06 18:20:05.708522] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.268 [2024-12-06 18:20:05.708553] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:40.268 [2024-12-06 18:20:05.708570] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.268 [2024-12-06 18:20:05.711202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.268 [2024-12-06 18:20:05.711264] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:40.268 BaseBdev1 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.268 BaseBdev2_malloc 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.268 [2024-12-06 18:20:05.762369] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:40.268 [2024-12-06 18:20:05.762492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.268 [2024-12-06 18:20:05.762521] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:40.268 [2024-12-06 18:20:05.762539] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.268 [2024-12-06 18:20:05.765210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.268 [2024-12-06 18:20:05.765286] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:40.268 BaseBdev2 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.268 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.526 spare_malloc 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.526 spare_delay 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.526 [2024-12-06 18:20:05.829624] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:40.526 [2024-12-06 18:20:05.829712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.526 [2024-12-06 18:20:05.829741] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:40.526 [2024-12-06 18:20:05.829758] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.526 [2024-12-06 18:20:05.832306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.526 [2024-12-06 18:20:05.832369] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:40.526 spare 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:21:40.526 [2024-12-06 18:20:05.837685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:40.526 [2024-12-06 18:20:05.840155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:40.526 [2024-12-06 18:20:05.840388] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:40.526 [2024-12-06 18:20:05.840425] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:40.526 [2024-12-06 18:20:05.840551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:40.526 [2024-12-06 18:20:05.840730] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:40.526 [2024-12-06 18:20:05.840758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:40.526 [2024-12-06 18:20:05.840920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:40.526 18:20:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.526 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.527 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.527 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.527 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.527 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.527 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.527 "name": "raid_bdev1", 00:21:40.527 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:40.527 "strip_size_kb": 0, 00:21:40.527 "state": "online", 00:21:40.527 "raid_level": "raid1", 00:21:40.527 "superblock": true, 00:21:40.527 "num_base_bdevs": 2, 00:21:40.527 "num_base_bdevs_discovered": 2, 00:21:40.527 "num_base_bdevs_operational": 2, 00:21:40.527 "base_bdevs_list": [ 00:21:40.527 { 00:21:40.527 "name": "BaseBdev1", 00:21:40.527 "uuid": "6b4c85f1-e462-5eca-bc6f-7ca182dffad2", 00:21:40.527 "is_configured": true, 00:21:40.527 "data_offset": 256, 00:21:40.527 "data_size": 7936 00:21:40.527 }, 00:21:40.527 { 00:21:40.527 "name": "BaseBdev2", 00:21:40.527 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:40.527 "is_configured": true, 00:21:40.527 "data_offset": 256, 00:21:40.527 "data_size": 7936 
00:21:40.527 } 00:21:40.527 ] 00:21:40.527 }' 00:21:40.527 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.527 18:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:41.092 [2024-12-06 18:20:06.366280] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:41.092 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:41.350 [2024-12-06 18:20:06.710163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:41.350 /dev/nbd0 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.350 1+0 records in 00:21:41.350 1+0 records out 00:21:41.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371872 s, 11.0 MB/s 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:41.350 18:20:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:41.350 18:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:42.283 7936+0 records in 00:21:42.283 7936+0 records out 00:21:42.283 32505856 bytes (33 MB, 31 MiB) copied, 0.9346 s, 34.8 MB/s 00:21:42.283 18:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:42.283 18:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:42.283 18:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:42.283 18:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:42.283 18:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:42.283 18:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:42.283 18:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:42.542 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:42.542 [2024-12-06 18:20:08.030423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.542 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:42.542 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:42.542 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:42.542 18:20:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:42.542 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:42.542 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:42.542 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:42.542 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:42.542 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.542 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.543 [2024-12-06 18:20:08.042545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.543 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.801 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.801 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.801 "name": "raid_bdev1", 00:21:42.801 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:42.801 "strip_size_kb": 0, 00:21:42.801 "state": "online", 00:21:42.801 "raid_level": "raid1", 00:21:42.801 "superblock": true, 00:21:42.801 "num_base_bdevs": 2, 00:21:42.801 "num_base_bdevs_discovered": 1, 00:21:42.801 "num_base_bdevs_operational": 1, 00:21:42.801 "base_bdevs_list": [ 00:21:42.801 { 00:21:42.801 "name": null, 00:21:42.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.801 "is_configured": false, 00:21:42.801 "data_offset": 0, 00:21:42.801 "data_size": 7936 00:21:42.801 }, 00:21:42.801 { 00:21:42.801 "name": "BaseBdev2", 00:21:42.801 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:42.801 "is_configured": true, 00:21:42.801 "data_offset": 256, 00:21:42.801 "data_size": 7936 00:21:42.801 } 00:21:42.801 ] 00:21:42.801 }' 00:21:42.801 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.801 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.059 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:43.059 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.059 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:43.059 [2024-12-06 18:20:08.546756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.059 [2024-12-06 18:20:08.560931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:21:43.059 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.059 18:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:43.059 [2024-12-06 18:20:08.563556] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:44.436 "name": "raid_bdev1", 00:21:44.436 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:44.436 "strip_size_kb": 0, 00:21:44.436 "state": "online", 00:21:44.436 "raid_level": "raid1", 00:21:44.436 "superblock": true, 00:21:44.436 "num_base_bdevs": 2, 00:21:44.436 "num_base_bdevs_discovered": 2, 00:21:44.436 "num_base_bdevs_operational": 2, 00:21:44.436 "process": { 00:21:44.436 "type": "rebuild", 00:21:44.436 "target": "spare", 00:21:44.436 "progress": { 00:21:44.436 "blocks": 2560, 00:21:44.436 "percent": 32 00:21:44.436 } 00:21:44.436 }, 00:21:44.436 "base_bdevs_list": [ 00:21:44.436 { 00:21:44.436 "name": "spare", 00:21:44.436 "uuid": "e8be5006-5946-55f2-a980-a43e2be8274e", 00:21:44.436 "is_configured": true, 00:21:44.436 "data_offset": 256, 00:21:44.436 "data_size": 7936 00:21:44.436 }, 00:21:44.436 { 00:21:44.436 "name": "BaseBdev2", 00:21:44.436 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:44.436 "is_configured": true, 00:21:44.436 "data_offset": 256, 00:21:44.436 "data_size": 7936 00:21:44.436 } 00:21:44.436 ] 00:21:44.436 }' 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:44.436 18:20:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.436 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:44.436 [2024-12-06 18:20:09.733380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.436 [2024-12-06 18:20:09.773365] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:44.436 [2024-12-06 18:20:09.773532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.436 [2024-12-06 18:20:09.773557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.436 [2024-12-06 18:20:09.773575] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.437 18:20:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.437 "name": "raid_bdev1", 00:21:44.437 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:44.437 "strip_size_kb": 0, 00:21:44.437 "state": "online", 00:21:44.437 "raid_level": "raid1", 00:21:44.437 "superblock": true, 00:21:44.437 "num_base_bdevs": 2, 00:21:44.437 "num_base_bdevs_discovered": 1, 00:21:44.437 "num_base_bdevs_operational": 1, 00:21:44.437 "base_bdevs_list": [ 00:21:44.437 { 00:21:44.437 "name": null, 00:21:44.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.437 "is_configured": false, 00:21:44.437 "data_offset": 0, 00:21:44.437 "data_size": 7936 00:21:44.437 }, 00:21:44.437 { 00:21:44.437 "name": "BaseBdev2", 00:21:44.437 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:44.437 "is_configured": true, 00:21:44.437 "data_offset": 256, 00:21:44.437 "data_size": 7936 00:21:44.437 } 00:21:44.437 ] 00:21:44.437 }' 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.437 18:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:45.006 "name": "raid_bdev1", 00:21:45.006 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:45.006 "strip_size_kb": 0, 00:21:45.006 "state": "online", 00:21:45.006 "raid_level": "raid1", 00:21:45.006 "superblock": true, 00:21:45.006 "num_base_bdevs": 2, 00:21:45.006 "num_base_bdevs_discovered": 1, 00:21:45.006 "num_base_bdevs_operational": 1, 00:21:45.006 "base_bdevs_list": [ 00:21:45.006 { 00:21:45.006 "name": null, 00:21:45.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.006 
"is_configured": false, 00:21:45.006 "data_offset": 0, 00:21:45.006 "data_size": 7936 00:21:45.006 }, 00:21:45.006 { 00:21:45.006 "name": "BaseBdev2", 00:21:45.006 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:45.006 "is_configured": true, 00:21:45.006 "data_offset": 256, 00:21:45.006 "data_size": 7936 00:21:45.006 } 00:21:45.006 ] 00:21:45.006 }' 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:45.006 [2024-12-06 18:20:10.497291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:45.006 [2024-12-06 18:20:10.510272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.006 18:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:45.006 [2024-12-06 18:20:10.512730] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:46.380 18:20:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:46.380 "name": "raid_bdev1", 00:21:46.380 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:46.380 "strip_size_kb": 0, 00:21:46.380 "state": "online", 00:21:46.380 "raid_level": "raid1", 00:21:46.380 "superblock": true, 00:21:46.380 "num_base_bdevs": 2, 00:21:46.380 "num_base_bdevs_discovered": 2, 00:21:46.380 "num_base_bdevs_operational": 2, 00:21:46.380 "process": { 00:21:46.380 "type": "rebuild", 00:21:46.380 "target": "spare", 00:21:46.380 "progress": { 00:21:46.380 "blocks": 2560, 00:21:46.380 "percent": 32 00:21:46.380 } 00:21:46.380 }, 00:21:46.380 "base_bdevs_list": [ 00:21:46.380 { 00:21:46.380 "name": "spare", 00:21:46.380 "uuid": "e8be5006-5946-55f2-a980-a43e2be8274e", 00:21:46.380 "is_configured": true, 00:21:46.380 "data_offset": 256, 00:21:46.380 "data_size": 7936 00:21:46.380 }, 
00:21:46.380 { 00:21:46.380 "name": "BaseBdev2", 00:21:46.380 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:46.380 "is_configured": true, 00:21:46.380 "data_offset": 256, 00:21:46.380 "data_size": 7936 00:21:46.380 } 00:21:46.380 ] 00:21:46.380 }' 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:46.380 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=767 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:46.380 18:20:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:46.380 "name": "raid_bdev1", 00:21:46.380 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:46.380 "strip_size_kb": 0, 00:21:46.380 "state": "online", 00:21:46.380 "raid_level": "raid1", 00:21:46.380 "superblock": true, 00:21:46.380 "num_base_bdevs": 2, 00:21:46.380 "num_base_bdevs_discovered": 2, 00:21:46.380 "num_base_bdevs_operational": 2, 00:21:46.380 "process": { 00:21:46.380 "type": "rebuild", 00:21:46.380 "target": "spare", 00:21:46.380 "progress": { 00:21:46.380 "blocks": 2816, 00:21:46.380 "percent": 35 00:21:46.380 } 00:21:46.380 }, 00:21:46.380 "base_bdevs_list": [ 00:21:46.380 { 00:21:46.380 "name": "spare", 00:21:46.380 "uuid": "e8be5006-5946-55f2-a980-a43e2be8274e", 00:21:46.380 "is_configured": true, 00:21:46.380 "data_offset": 256, 00:21:46.380 "data_size": 7936 00:21:46.380 }, 00:21:46.380 { 00:21:46.380 "name": "BaseBdev2", 00:21:46.380 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:46.380 
"is_configured": true, 00:21:46.380 "data_offset": 256, 00:21:46.380 "data_size": 7936 00:21:46.380 } 00:21:46.380 ] 00:21:46.380 }' 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:46.380 18:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:47.316 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:47.316 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:47.316 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:47.316 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:47.316 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:47.316 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:47.316 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.574 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.574 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.574 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:47.574 18:20:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.574 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:47.574 "name": "raid_bdev1", 00:21:47.574 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:47.574 "strip_size_kb": 0, 00:21:47.574 "state": "online", 00:21:47.574 "raid_level": "raid1", 00:21:47.574 "superblock": true, 00:21:47.574 "num_base_bdevs": 2, 00:21:47.574 "num_base_bdevs_discovered": 2, 00:21:47.574 "num_base_bdevs_operational": 2, 00:21:47.574 "process": { 00:21:47.574 "type": "rebuild", 00:21:47.574 "target": "spare", 00:21:47.574 "progress": { 00:21:47.574 "blocks": 5888, 00:21:47.574 "percent": 74 00:21:47.574 } 00:21:47.574 }, 00:21:47.574 "base_bdevs_list": [ 00:21:47.574 { 00:21:47.574 "name": "spare", 00:21:47.574 "uuid": "e8be5006-5946-55f2-a980-a43e2be8274e", 00:21:47.574 "is_configured": true, 00:21:47.574 "data_offset": 256, 00:21:47.574 "data_size": 7936 00:21:47.574 }, 00:21:47.574 { 00:21:47.574 "name": "BaseBdev2", 00:21:47.574 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:47.574 "is_configured": true, 00:21:47.574 "data_offset": 256, 00:21:47.574 "data_size": 7936 00:21:47.574 } 00:21:47.574 ] 00:21:47.574 }' 00:21:47.574 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:47.574 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:47.574 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:47.574 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:47.574 18:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:48.165 [2024-12-06 18:20:13.636193] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:21:48.165 [2024-12-06 18:20:13.636314] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:48.165 [2024-12-06 18:20:13.636463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.732 18:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:48.732 18:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:48.732 18:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:48.732 "name": "raid_bdev1", 00:21:48.732 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:48.732 "strip_size_kb": 0, 00:21:48.732 "state": "online", 00:21:48.732 "raid_level": "raid1", 00:21:48.732 "superblock": true, 00:21:48.732 
"num_base_bdevs": 2, 00:21:48.732 "num_base_bdevs_discovered": 2, 00:21:48.732 "num_base_bdevs_operational": 2, 00:21:48.732 "base_bdevs_list": [ 00:21:48.732 { 00:21:48.732 "name": "spare", 00:21:48.732 "uuid": "e8be5006-5946-55f2-a980-a43e2be8274e", 00:21:48.732 "is_configured": true, 00:21:48.732 "data_offset": 256, 00:21:48.732 "data_size": 7936 00:21:48.732 }, 00:21:48.732 { 00:21:48.732 "name": "BaseBdev2", 00:21:48.732 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:48.732 "is_configured": true, 00:21:48.732 "data_offset": 256, 00:21:48.732 "data_size": 7936 00:21:48.732 } 00:21:48.732 ] 00:21:48.732 }' 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.732 18:20:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:48.732 "name": "raid_bdev1", 00:21:48.732 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:48.732 "strip_size_kb": 0, 00:21:48.732 "state": "online", 00:21:48.732 "raid_level": "raid1", 00:21:48.732 "superblock": true, 00:21:48.732 "num_base_bdevs": 2, 00:21:48.732 "num_base_bdevs_discovered": 2, 00:21:48.732 "num_base_bdevs_operational": 2, 00:21:48.732 "base_bdevs_list": [ 00:21:48.732 { 00:21:48.732 "name": "spare", 00:21:48.732 "uuid": "e8be5006-5946-55f2-a980-a43e2be8274e", 00:21:48.732 "is_configured": true, 00:21:48.732 "data_offset": 256, 00:21:48.732 "data_size": 7936 00:21:48.732 }, 00:21:48.732 { 00:21:48.732 "name": "BaseBdev2", 00:21:48.732 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:48.732 "is_configured": true, 00:21:48.732 "data_offset": 256, 00:21:48.732 "data_size": 7936 00:21:48.732 } 00:21:48.732 ] 00:21:48.732 }' 00:21:48.732 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.992 "name": "raid_bdev1", 00:21:48.992 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:48.992 
"strip_size_kb": 0, 00:21:48.992 "state": "online", 00:21:48.992 "raid_level": "raid1", 00:21:48.992 "superblock": true, 00:21:48.992 "num_base_bdevs": 2, 00:21:48.992 "num_base_bdevs_discovered": 2, 00:21:48.992 "num_base_bdevs_operational": 2, 00:21:48.992 "base_bdevs_list": [ 00:21:48.992 { 00:21:48.992 "name": "spare", 00:21:48.992 "uuid": "e8be5006-5946-55f2-a980-a43e2be8274e", 00:21:48.992 "is_configured": true, 00:21:48.992 "data_offset": 256, 00:21:48.992 "data_size": 7936 00:21:48.992 }, 00:21:48.992 { 00:21:48.992 "name": "BaseBdev2", 00:21:48.992 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:48.992 "is_configured": true, 00:21:48.992 "data_offset": 256, 00:21:48.992 "data_size": 7936 00:21:48.992 } 00:21:48.992 ] 00:21:48.992 }' 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.992 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:49.561 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:49.561 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.561 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:49.561 [2024-12-06 18:20:14.851682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:49.561 [2024-12-06 18:20:14.851722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:49.561 [2024-12-06 18:20:14.851885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:49.561 [2024-12-06 18:20:14.851992] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:49.561 [2024-12-06 18:20:14.852018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:21:49.561 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.561 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.561 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:49.562 18:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:49.820 /dev/nbd0 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:49.820 1+0 records in 00:21:49.820 1+0 records out 00:21:49.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301352 s, 13.6 MB/s 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:49.820 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:49.821 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:21:49.821 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:49.821 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:49.821 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:50.079 /dev/nbd1 00:21:50.079 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:50.079 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:50.079 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:50.079 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:21:50.079 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:50.079 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:50.079 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:50.079 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:21:50.079 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:50.079 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:50.079 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:50.079 1+0 records in 00:21:50.079 1+0 records out 00:21:50.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399634 s, 10.2 MB/s 00:21:50.337 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:50.337 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:21:50.337 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:50.337 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:50.337 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:21:50.337 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:50.338 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:50.338 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:50.338 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:50.338 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:50.338 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:50.338 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:21:50.338 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:50.338 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:50.338 18:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:50.597 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:50.597 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:50.597 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:50.597 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:50.597 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:50.597 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:50.597 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:50.597 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:50.597 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:50.597 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:51.165 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:51.165 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:51.165 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:21:51.165 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:51.165 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:51.165 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:51.165 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:51.165 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:51.165 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:51.165 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:51.165 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.165 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.166 [2024-12-06 18:20:16.409448] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:51.166 [2024-12-06 18:20:16.409538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.166 [2024-12-06 18:20:16.409569] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:51.166 [2024-12-06 18:20:16.409584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:21:51.166 [2024-12-06 18:20:16.412407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.166 [2024-12-06 18:20:16.412498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:51.166 [2024-12-06 18:20:16.412595] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:51.166 [2024-12-06 18:20:16.412659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:51.166 [2024-12-06 18:20:16.412884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:51.166 spare 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.166 [2024-12-06 18:20:16.513011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:51.166 [2024-12-06 18:20:16.513086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:51.166 [2024-12-06 18:20:16.513253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:51.166 [2024-12-06 18:20:16.513497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:51.166 [2024-12-06 18:20:16.513527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:51.166 [2024-12-06 18:20:16.513714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.166 "name": "raid_bdev1", 00:21:51.166 "uuid": 
"8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:51.166 "strip_size_kb": 0, 00:21:51.166 "state": "online", 00:21:51.166 "raid_level": "raid1", 00:21:51.166 "superblock": true, 00:21:51.166 "num_base_bdevs": 2, 00:21:51.166 "num_base_bdevs_discovered": 2, 00:21:51.166 "num_base_bdevs_operational": 2, 00:21:51.166 "base_bdevs_list": [ 00:21:51.166 { 00:21:51.166 "name": "spare", 00:21:51.166 "uuid": "e8be5006-5946-55f2-a980-a43e2be8274e", 00:21:51.166 "is_configured": true, 00:21:51.166 "data_offset": 256, 00:21:51.166 "data_size": 7936 00:21:51.166 }, 00:21:51.166 { 00:21:51.166 "name": "BaseBdev2", 00:21:51.166 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:51.166 "is_configured": true, 00:21:51.166 "data_offset": 256, 00:21:51.166 "data_size": 7936 00:21:51.166 } 00:21:51.166 ] 00:21:51.166 }' 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.166 18:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:51.838 "name": "raid_bdev1", 00:21:51.838 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:51.838 "strip_size_kb": 0, 00:21:51.838 "state": "online", 00:21:51.838 "raid_level": "raid1", 00:21:51.838 "superblock": true, 00:21:51.838 "num_base_bdevs": 2, 00:21:51.838 "num_base_bdevs_discovered": 2, 00:21:51.838 "num_base_bdevs_operational": 2, 00:21:51.838 "base_bdevs_list": [ 00:21:51.838 { 00:21:51.838 "name": "spare", 00:21:51.838 "uuid": "e8be5006-5946-55f2-a980-a43e2be8274e", 00:21:51.838 "is_configured": true, 00:21:51.838 "data_offset": 256, 00:21:51.838 "data_size": 7936 00:21:51.838 }, 00:21:51.838 { 00:21:51.838 "name": "BaseBdev2", 00:21:51.838 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:51.838 "is_configured": true, 00:21:51.838 "data_offset": 256, 00:21:51.838 "data_size": 7936 00:21:51.838 } 00:21:51.838 ] 00:21:51.838 }' 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.838 18:20:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.838 [2024-12-06 18:20:17.274108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.838 "name": "raid_bdev1", 00:21:51.838 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:51.838 "strip_size_kb": 0, 00:21:51.838 "state": "online", 00:21:51.838 "raid_level": "raid1", 00:21:51.838 "superblock": true, 00:21:51.838 "num_base_bdevs": 2, 00:21:51.838 "num_base_bdevs_discovered": 1, 00:21:51.838 "num_base_bdevs_operational": 1, 00:21:51.838 "base_bdevs_list": [ 00:21:51.838 { 00:21:51.838 "name": null, 00:21:51.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.838 "is_configured": false, 00:21:51.838 "data_offset": 0, 00:21:51.838 "data_size": 7936 00:21:51.838 }, 00:21:51.838 { 00:21:51.838 "name": "BaseBdev2", 00:21:51.838 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:51.838 "is_configured": true, 00:21:51.838 "data_offset": 256, 00:21:51.838 "data_size": 7936 00:21:51.838 } 00:21:51.838 ] 00:21:51.838 }' 00:21:51.838 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.838 18:20:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:52.406 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:52.406 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.406 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:52.406 [2024-12-06 18:20:17.786280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:52.406 [2024-12-06 18:20:17.786566] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:52.406 [2024-12-06 18:20:17.786594] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:52.406 [2024-12-06 18:20:17.786650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:52.406 [2024-12-06 18:20:17.799890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:52.406 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.406 18:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:52.406 [2024-12-06 18:20:17.802520] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:53.343 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.343 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.343 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.343 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:21:53.343 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.343 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.343 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.343 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.343 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:53.343 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.343 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.343 "name": "raid_bdev1", 00:21:53.343 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:53.343 "strip_size_kb": 0, 00:21:53.343 "state": "online", 00:21:53.343 "raid_level": "raid1", 00:21:53.343 "superblock": true, 00:21:53.343 "num_base_bdevs": 2, 00:21:53.343 "num_base_bdevs_discovered": 2, 00:21:53.343 "num_base_bdevs_operational": 2, 00:21:53.343 "process": { 00:21:53.343 "type": "rebuild", 00:21:53.343 "target": "spare", 00:21:53.343 "progress": { 00:21:53.344 "blocks": 2560, 00:21:53.344 "percent": 32 00:21:53.344 } 00:21:53.344 }, 00:21:53.344 "base_bdevs_list": [ 00:21:53.344 { 00:21:53.344 "name": "spare", 00:21:53.344 "uuid": "e8be5006-5946-55f2-a980-a43e2be8274e", 00:21:53.344 "is_configured": true, 00:21:53.344 "data_offset": 256, 00:21:53.344 "data_size": 7936 00:21:53.344 }, 00:21:53.344 { 00:21:53.344 "name": "BaseBdev2", 00:21:53.344 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:53.344 "is_configured": true, 00:21:53.344 "data_offset": 256, 00:21:53.344 "data_size": 7936 00:21:53.344 } 00:21:53.344 ] 00:21:53.344 }' 00:21:53.344 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.602 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.602 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.602 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.602 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:53.602 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.602 18:20:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:53.602 [2024-12-06 18:20:18.964559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:53.602 [2024-12-06 18:20:19.011711] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:53.602 [2024-12-06 18:20:19.011814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:53.602 [2024-12-06 18:20:19.011836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:53.602 [2024-12-06 18:20:19.011862] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.602 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:53.603 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.603 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.603 "name": "raid_bdev1", 00:21:53.603 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:53.603 "strip_size_kb": 0, 00:21:53.603 "state": "online", 00:21:53.603 "raid_level": "raid1", 00:21:53.603 "superblock": true, 00:21:53.603 "num_base_bdevs": 2, 00:21:53.603 "num_base_bdevs_discovered": 1, 00:21:53.603 "num_base_bdevs_operational": 1, 00:21:53.603 "base_bdevs_list": [ 00:21:53.603 { 00:21:53.603 "name": null, 00:21:53.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.603 
"is_configured": false, 00:21:53.603 "data_offset": 0, 00:21:53.603 "data_size": 7936 00:21:53.603 }, 00:21:53.603 { 00:21:53.603 "name": "BaseBdev2", 00:21:53.603 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:53.603 "is_configured": true, 00:21:53.603 "data_offset": 256, 00:21:53.603 "data_size": 7936 00:21:53.603 } 00:21:53.603 ] 00:21:53.603 }' 00:21:53.603 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.603 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:54.170 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:54.170 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.170 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:54.170 [2024-12-06 18:20:19.537991] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:54.170 [2024-12-06 18:20:19.538090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.170 [2024-12-06 18:20:19.538146] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:54.170 [2024-12-06 18:20:19.538164] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.170 [2024-12-06 18:20:19.538522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.170 [2024-12-06 18:20:19.538551] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:54.170 [2024-12-06 18:20:19.538649] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:54.170 [2024-12-06 18:20:19.538673] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:21:54.170 [2024-12-06 18:20:19.538688] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:54.170 [2024-12-06 18:20:19.538722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:54.170 [2024-12-06 18:20:19.551486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:54.170 spare 00:21:54.170 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.170 18:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:54.170 [2024-12-06 18:20:19.553983] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:55.107 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.107 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.107 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.107 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.107 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.107 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.107 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.107 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:55.107 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.107 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:55.107 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.107 "name": "raid_bdev1", 00:21:55.107 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:55.107 "strip_size_kb": 0, 00:21:55.107 "state": "online", 00:21:55.107 "raid_level": "raid1", 00:21:55.107 "superblock": true, 00:21:55.107 "num_base_bdevs": 2, 00:21:55.107 "num_base_bdevs_discovered": 2, 00:21:55.107 "num_base_bdevs_operational": 2, 00:21:55.107 "process": { 00:21:55.107 "type": "rebuild", 00:21:55.107 "target": "spare", 00:21:55.107 "progress": { 00:21:55.107 "blocks": 2560, 00:21:55.107 "percent": 32 00:21:55.107 } 00:21:55.107 }, 00:21:55.107 "base_bdevs_list": [ 00:21:55.107 { 00:21:55.107 "name": "spare", 00:21:55.107 "uuid": "e8be5006-5946-55f2-a980-a43e2be8274e", 00:21:55.107 "is_configured": true, 00:21:55.107 "data_offset": 256, 00:21:55.107 "data_size": 7936 00:21:55.107 }, 00:21:55.107 { 00:21:55.107 "name": "BaseBdev2", 00:21:55.107 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:55.107 "is_configured": true, 00:21:55.107 "data_offset": 256, 00:21:55.107 "data_size": 7936 00:21:55.107 } 00:21:55.107 ] 00:21:55.107 }' 00:21:55.108 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.367 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.367 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.367 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.367 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:55.367 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.367 18:20:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:55.367 [2024-12-06 18:20:20.728368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:55.367 [2024-12-06 18:20:20.763003] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:55.367 [2024-12-06 18:20:20.763118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.367 [2024-12-06 18:20:20.763146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:55.367 [2024-12-06 18:20:20.763172] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:55.367 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.367 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:55.367 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:55.367 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:55.367 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:55.367 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:55.367 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:55.368 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.368 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.368 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.368 18:20:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.368 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.368 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.368 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.368 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:55.368 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.368 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.368 "name": "raid_bdev1", 00:21:55.368 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:55.368 "strip_size_kb": 0, 00:21:55.368 "state": "online", 00:21:55.368 "raid_level": "raid1", 00:21:55.368 "superblock": true, 00:21:55.368 "num_base_bdevs": 2, 00:21:55.368 "num_base_bdevs_discovered": 1, 00:21:55.368 "num_base_bdevs_operational": 1, 00:21:55.368 "base_bdevs_list": [ 00:21:55.368 { 00:21:55.368 "name": null, 00:21:55.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.368 "is_configured": false, 00:21:55.368 "data_offset": 0, 00:21:55.368 "data_size": 7936 00:21:55.368 }, 00:21:55.368 { 00:21:55.368 "name": "BaseBdev2", 00:21:55.368 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:55.368 "is_configured": true, 00:21:55.368 "data_offset": 256, 00:21:55.368 "data_size": 7936 00:21:55.368 } 00:21:55.368 ] 00:21:55.368 }' 00:21:55.368 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.368 18:20:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:55.936 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:21:55.936 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.936 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:55.936 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:55.936 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.936 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.936 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.936 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.936 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:55.936 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.936 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.936 "name": "raid_bdev1", 00:21:55.936 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:55.936 "strip_size_kb": 0, 00:21:55.936 "state": "online", 00:21:55.936 "raid_level": "raid1", 00:21:55.936 "superblock": true, 00:21:55.936 "num_base_bdevs": 2, 00:21:55.936 "num_base_bdevs_discovered": 1, 00:21:55.936 "num_base_bdevs_operational": 1, 00:21:55.936 "base_bdevs_list": [ 00:21:55.936 { 00:21:55.936 "name": null, 00:21:55.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.936 "is_configured": false, 00:21:55.936 "data_offset": 0, 00:21:55.936 "data_size": 7936 00:21:55.936 }, 00:21:55.936 { 00:21:55.936 "name": "BaseBdev2", 00:21:55.936 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:55.936 "is_configured": true, 
00:21:55.936 "data_offset": 256, 00:21:55.936 "data_size": 7936 00:21:55.936 } 00:21:55.936 ] 00:21:55.936 }' 00:21:55.936 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.936 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:55.936 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:56.195 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:56.195 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:56.195 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.195 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:56.195 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.195 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:56.195 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.195 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:56.195 [2024-12-06 18:20:21.505448] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:56.195 [2024-12-06 18:20:21.505526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.195 [2024-12-06 18:20:21.505558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:56.195 [2024-12-06 18:20:21.505574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.195 [2024-12-06 18:20:21.505864] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.195 [2024-12-06 18:20:21.505888] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:56.195 [2024-12-06 18:20:21.505959] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:56.195 [2024-12-06 18:20:21.505980] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:56.195 [2024-12-06 18:20:21.505995] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:56.195 [2024-12-06 18:20:21.506008] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:56.195 BaseBdev1 00:21:56.195 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.195 18:20:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.132 "name": "raid_bdev1", 00:21:57.132 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:57.132 "strip_size_kb": 0, 00:21:57.132 "state": "online", 00:21:57.132 "raid_level": "raid1", 00:21:57.132 "superblock": true, 00:21:57.132 "num_base_bdevs": 2, 00:21:57.132 "num_base_bdevs_discovered": 1, 00:21:57.132 "num_base_bdevs_operational": 1, 00:21:57.132 "base_bdevs_list": [ 00:21:57.132 { 00:21:57.132 "name": null, 00:21:57.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.132 "is_configured": false, 00:21:57.132 "data_offset": 0, 00:21:57.132 "data_size": 7936 00:21:57.132 }, 00:21:57.132 { 00:21:57.132 "name": "BaseBdev2", 00:21:57.132 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:57.132 "is_configured": true, 00:21:57.132 "data_offset": 256, 00:21:57.132 "data_size": 7936 00:21:57.132 } 00:21:57.132 ] 00:21:57.132 }' 00:21:57.132 18:20:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.132 18:20:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:57.700 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:57.700 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:57.700 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:57.700 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:57.701 "name": "raid_bdev1", 00:21:57.701 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:57.701 "strip_size_kb": 0, 00:21:57.701 "state": "online", 00:21:57.701 "raid_level": "raid1", 00:21:57.701 "superblock": true, 00:21:57.701 "num_base_bdevs": 2, 00:21:57.701 "num_base_bdevs_discovered": 1, 00:21:57.701 "num_base_bdevs_operational": 1, 00:21:57.701 "base_bdevs_list": [ 00:21:57.701 { 00:21:57.701 "name": null, 00:21:57.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.701 "is_configured": false, 00:21:57.701 "data_offset": 0, 00:21:57.701 
"data_size": 7936 00:21:57.701 }, 00:21:57.701 { 00:21:57.701 "name": "BaseBdev2", 00:21:57.701 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:57.701 "is_configured": true, 00:21:57.701 "data_offset": 256, 00:21:57.701 "data_size": 7936 00:21:57.701 } 00:21:57.701 ] 00:21:57.701 }' 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.701 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:57.959 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.959 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:57.959 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:57.959 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:57.959 [2024-12-06 18:20:23.226215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:57.959 [2024-12-06 18:20:23.226592] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:57.959 [2024-12-06 18:20:23.226628] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:57.959 request: 00:21:57.959 { 00:21:57.959 "base_bdev": "BaseBdev1", 00:21:57.959 "raid_bdev": "raid_bdev1", 00:21:57.959 "method": "bdev_raid_add_base_bdev", 00:21:57.959 "req_id": 1 00:21:57.959 } 00:21:57.959 Got JSON-RPC error response 00:21:57.959 response: 00:21:57.959 { 00:21:57.959 "code": -22, 00:21:57.959 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:57.959 } 00:21:57.959 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:57.959 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:21:57.959 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.959 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.959 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.959 18:20:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:58.894 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:58.894 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:58.894 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:58.894 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:58.894 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:58.894 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:58.894 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.894 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.894 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.894 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.894 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.894 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.894 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.895 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:58.895 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.895 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.895 "name": "raid_bdev1", 00:21:58.895 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:58.895 "strip_size_kb": 0, 00:21:58.895 "state": "online", 00:21:58.895 "raid_level": "raid1", 00:21:58.895 "superblock": true, 00:21:58.895 "num_base_bdevs": 2, 00:21:58.895 "num_base_bdevs_discovered": 1, 00:21:58.895 "num_base_bdevs_operational": 1, 00:21:58.895 "base_bdevs_list": [ 
00:21:58.895 { 00:21:58.895 "name": null, 00:21:58.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.895 "is_configured": false, 00:21:58.895 "data_offset": 0, 00:21:58.895 "data_size": 7936 00:21:58.895 }, 00:21:58.895 { 00:21:58.895 "name": "BaseBdev2", 00:21:58.895 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:58.895 "is_configured": true, 00:21:58.895 "data_offset": 256, 00:21:58.895 "data_size": 7936 00:21:58.895 } 00:21:58.895 ] 00:21:58.895 }' 00:21:58.895 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.895 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:59.461 "name": "raid_bdev1", 00:21:59.461 "uuid": "8fbe43aa-8508-4ca1-9eca-b4818d9f4ef9", 00:21:59.461 "strip_size_kb": 0, 00:21:59.461 "state": "online", 00:21:59.461 "raid_level": "raid1", 00:21:59.461 "superblock": true, 00:21:59.461 "num_base_bdevs": 2, 00:21:59.461 "num_base_bdevs_discovered": 1, 00:21:59.461 "num_base_bdevs_operational": 1, 00:21:59.461 "base_bdevs_list": [ 00:21:59.461 { 00:21:59.461 "name": null, 00:21:59.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.461 "is_configured": false, 00:21:59.461 "data_offset": 0, 00:21:59.461 "data_size": 7936 00:21:59.461 }, 00:21:59.461 { 00:21:59.461 "name": "BaseBdev2", 00:21:59.461 "uuid": "89932c46-13a6-52b9-a3df-77e6b5a583e2", 00:21:59.461 "is_configured": true, 00:21:59.461 "data_offset": 256, 00:21:59.461 "data_size": 7936 00:21:59.461 } 00:21:59.461 ] 00:21:59.461 }' 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88268 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88268 ']' 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88268 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:21:59.461 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.461 
18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88268 00:21:59.720 killing process with pid 88268 00:21:59.720 Received shutdown signal, test time was about 60.000000 seconds 00:21:59.720 00:21:59.720 Latency(us) 00:21:59.720 [2024-12-06T18:20:25.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.720 [2024-12-06T18:20:25.240Z] =================================================================================================================== 00:21:59.720 [2024-12-06T18:20:25.240Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:59.720 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.720 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.720 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88268' 00:21:59.720 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88268 00:21:59.720 18:20:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88268 00:21:59.720 [2024-12-06 18:20:24.982167] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:59.720 [2024-12-06 18:20:24.982399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.720 [2024-12-06 18:20:24.982475] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:59.720 [2024-12-06 18:20:24.982503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:59.978 [2024-12-06 18:20:25.306678] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:01.349 18:20:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:22:01.349 00:22:01.349 real 0m22.040s 00:22:01.349 user 0m29.919s 00:22:01.349 sys 0m2.558s 00:22:01.349 18:20:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.349 18:20:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:01.349 ************************************ 00:22:01.349 END TEST raid_rebuild_test_sb_md_separate 00:22:01.349 ************************************ 00:22:01.349 18:20:26 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:22:01.349 18:20:26 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:22:01.349 18:20:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:01.349 18:20:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.349 18:20:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:01.349 ************************************ 00:22:01.349 START TEST raid_state_function_test_sb_md_interleaved 00:22:01.349 ************************************ 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:01.349 18:20:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88971 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:01.349 Process raid pid: 88971 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88971' 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88971 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88971 ']' 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.349 18:20:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.349 [2024-12-06 18:20:26.694043] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
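The `waitforlisten 88971` step above blocks until the freshly started `bdev_svc` process is up and listening on `/var/tmp/spdk.sock`. A minimal sketch of that pattern (not the harness code itself; path and retry budget are illustrative) is just a poll loop on the socket path:

```shell
# Sketch of the waitforlisten pattern: poll until the daemon's RPC
# socket path appears, then proceed. Returns non-zero on timeout.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while ((retries-- > 0)); do
        [[ -S $sock || -e $sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```

In the real harness the wait is followed by an RPC probe against the socket rather than a bare existence check.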
00:22:01.349 [2024-12-06 18:20:26.694337] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.607 [2024-12-06 18:20:26.892332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.607 [2024-12-06 18:20:27.089154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.864 [2024-12-06 18:20:27.350912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:01.864 [2024-12-06 18:20:27.350989] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.429 [2024-12-06 18:20:27.719444] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:02.429 [2024-12-06 18:20:27.719678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:02.429 [2024-12-06 18:20:27.719842] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:02.429 [2024-12-06 18:20:27.719909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:02.429 18:20:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.429 18:20:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.429 "name": "Existed_Raid", 00:22:02.429 "uuid": "012cf155-bc62-44da-9b50-96a77a3e7962", 00:22:02.429 "strip_size_kb": 0, 00:22:02.429 "state": "configuring", 00:22:02.429 "raid_level": "raid1", 00:22:02.429 "superblock": true, 00:22:02.429 "num_base_bdevs": 2, 00:22:02.429 "num_base_bdevs_discovered": 0, 00:22:02.429 "num_base_bdevs_operational": 2, 00:22:02.429 "base_bdevs_list": [ 00:22:02.429 { 00:22:02.429 "name": "BaseBdev1", 00:22:02.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.429 "is_configured": false, 00:22:02.429 "data_offset": 0, 00:22:02.429 "data_size": 0 00:22:02.429 }, 00:22:02.429 { 00:22:02.429 "name": "BaseBdev2", 00:22:02.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.429 "is_configured": false, 00:22:02.429 "data_offset": 0, 00:22:02.429 "data_size": 0 00:22:02.429 } 00:22:02.429 ] 00:22:02.429 }' 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.429 18:20:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.997 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:02.997 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.997 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.997 [2024-12-06 18:20:28.291621] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:02.997 [2024-12-06 18:20:28.291668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:22:02.997 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.997 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:02.997 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.997 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.997 [2024-12-06 18:20:28.299587] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:02.997 [2024-12-06 18:20:28.299828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:02.997 [2024-12-06 18:20:28.299974] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:02.997 [2024-12-06 18:20:28.300040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:02.997 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.997 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.998 [2024-12-06 18:20:28.347879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:02.998 BaseBdev1 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.998 [ 00:22:02.998 { 00:22:02.998 "name": "BaseBdev1", 00:22:02.998 "aliases": [ 00:22:02.998 "e8a7d331-20f0-4087-8888-847a4e249f47" 00:22:02.998 ], 00:22:02.998 "product_name": "Malloc disk", 00:22:02.998 "block_size": 4128, 00:22:02.998 "num_blocks": 8192, 00:22:02.998 "uuid": "e8a7d331-20f0-4087-8888-847a4e249f47", 00:22:02.998 "md_size": 32, 00:22:02.998 
"md_interleave": true, 00:22:02.998 "dif_type": 0, 00:22:02.998 "assigned_rate_limits": { 00:22:02.998 "rw_ios_per_sec": 0, 00:22:02.998 "rw_mbytes_per_sec": 0, 00:22:02.998 "r_mbytes_per_sec": 0, 00:22:02.998 "w_mbytes_per_sec": 0 00:22:02.998 }, 00:22:02.998 "claimed": true, 00:22:02.998 "claim_type": "exclusive_write", 00:22:02.998 "zoned": false, 00:22:02.998 "supported_io_types": { 00:22:02.998 "read": true, 00:22:02.998 "write": true, 00:22:02.998 "unmap": true, 00:22:02.998 "flush": true, 00:22:02.998 "reset": true, 00:22:02.998 "nvme_admin": false, 00:22:02.998 "nvme_io": false, 00:22:02.998 "nvme_io_md": false, 00:22:02.998 "write_zeroes": true, 00:22:02.998 "zcopy": true, 00:22:02.998 "get_zone_info": false, 00:22:02.998 "zone_management": false, 00:22:02.998 "zone_append": false, 00:22:02.998 "compare": false, 00:22:02.998 "compare_and_write": false, 00:22:02.998 "abort": true, 00:22:02.998 "seek_hole": false, 00:22:02.998 "seek_data": false, 00:22:02.998 "copy": true, 00:22:02.998 "nvme_iov_md": false 00:22:02.998 }, 00:22:02.998 "memory_domains": [ 00:22:02.998 { 00:22:02.998 "dma_device_id": "system", 00:22:02.998 "dma_device_type": 1 00:22:02.998 }, 00:22:02.998 { 00:22:02.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:02.998 "dma_device_type": 2 00:22:02.998 } 00:22:02.998 ], 00:22:02.998 "driver_specific": {} 00:22:02.998 } 00:22:02.998 ] 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:02.998 18:20:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.998 "name": "Existed_Raid", 00:22:02.998 "uuid": "db610789-b3e8-48d1-9c75-c4cf5f34c650", 00:22:02.998 "strip_size_kb": 0, 00:22:02.998 "state": "configuring", 00:22:02.998 "raid_level": "raid1", 
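The `verify_raid_bdev_state` calls above fetch `bdev_raid_get_bdevs all`, select the named raid bdev with jq, and compare fields such as `state` against the expected value. A hedged, self-contained stand-in for that field check (the harness uses jq; this portable sketch uses grep/cut on the JSON text instead) looks like:

```shell
# Extract the "state" field from a raid bdev JSON dump, as
# verify_raid_bdev_state does via jq in the real harness.
raid_state_of() {
    grep -o '"state": *"[^"]*"' <<<"$1" | head -n1 | cut -d'"' -f4
}
```

Against the `raid_bdev_info` dump above, this yields `configuring`, since BaseBdev2 has not been discovered yet.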
00:22:02.998 "superblock": true, 00:22:02.998 "num_base_bdevs": 2, 00:22:02.998 "num_base_bdevs_discovered": 1, 00:22:02.998 "num_base_bdevs_operational": 2, 00:22:02.998 "base_bdevs_list": [ 00:22:02.998 { 00:22:02.998 "name": "BaseBdev1", 00:22:02.998 "uuid": "e8a7d331-20f0-4087-8888-847a4e249f47", 00:22:02.998 "is_configured": true, 00:22:02.998 "data_offset": 256, 00:22:02.998 "data_size": 7936 00:22:02.998 }, 00:22:02.998 { 00:22:02.998 "name": "BaseBdev2", 00:22:02.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.998 "is_configured": false, 00:22:02.998 "data_offset": 0, 00:22:02.998 "data_size": 0 00:22:02.998 } 00:22:02.998 ] 00:22:02.998 }' 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.998 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.566 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.567 [2024-12-06 18:20:28.912126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:03.567 [2024-12-06 18:20:28.912213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.567 [2024-12-06 18:20:28.920185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.567 [2024-12-06 18:20:28.924175] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:03.567 [2024-12-06 18:20:28.924238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.567 
18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.567 "name": "Existed_Raid", 00:22:03.567 "uuid": "3708188e-979f-47d9-a267-61cd6a70c78f", 00:22:03.567 "strip_size_kb": 0, 00:22:03.567 "state": "configuring", 00:22:03.567 "raid_level": "raid1", 00:22:03.567 "superblock": true, 00:22:03.567 "num_base_bdevs": 2, 00:22:03.567 "num_base_bdevs_discovered": 1, 00:22:03.567 "num_base_bdevs_operational": 2, 00:22:03.567 "base_bdevs_list": [ 00:22:03.567 { 00:22:03.567 "name": "BaseBdev1", 00:22:03.567 "uuid": "e8a7d331-20f0-4087-8888-847a4e249f47", 00:22:03.567 "is_configured": true, 00:22:03.567 "data_offset": 256, 00:22:03.567 "data_size": 7936 00:22:03.567 }, 00:22:03.567 { 00:22:03.567 "name": "BaseBdev2", 00:22:03.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.567 "is_configured": false, 00:22:03.567 "data_offset": 0, 00:22:03.567 "data_size": 0 00:22:03.567 } 00:22:03.567 ] 00:22:03.567 }' 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:22:03.567 18:20:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.136 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:22:04.136 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.136 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.136 [2024-12-06 18:20:29.540228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:04.136 BaseBdev2 00:22:04.136 [2024-12-06 18:20:29.540849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:04.136 [2024-12-06 18:20:29.540876] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:04.136 [2024-12-06 18:20:29.540982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:04.136 [2024-12-06 18:20:29.541096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:04.136 [2024-12-06 18:20:29.541116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:04.136 [2024-12-06 18:20:29.541226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:04.136 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.136 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:04.136 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:04.136 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
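The `waitforbdev BaseBdev2` sequence that follows is a retry loop: re-run a probe (`rpc_cmd bdev_get_bdevs -b <name>` in the real harness) until it succeeds or the budget is exhausted. A generic sketch of that loop, with the probe passed in as a command so it is testable without an SPDK target:

```shell
# Sketch of the waitforbdev retry pattern: run the given probe command
# repeatedly until it succeeds, or fail after the retry budget runs out.
retry_until() {
    local retries=$1; shift
    local i
    for ((i = 0; i < retries; i++)); do
        "$@" && return 0
        sleep 0.1
    done
    return 1
}
```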
00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.137 [ 00:22:04.137 { 00:22:04.137 "name": "BaseBdev2", 00:22:04.137 "aliases": [ 00:22:04.137 "29a80d0d-2127-4cb6-b373-b4d8db7aa2d3" 00:22:04.137 ], 00:22:04.137 "product_name": "Malloc disk", 00:22:04.137 "block_size": 4128, 00:22:04.137 "num_blocks": 8192, 00:22:04.137 "uuid": "29a80d0d-2127-4cb6-b373-b4d8db7aa2d3", 00:22:04.137 "md_size": 32, 00:22:04.137 "md_interleave": true, 00:22:04.137 "dif_type": 0, 00:22:04.137 "assigned_rate_limits": { 00:22:04.137 "rw_ios_per_sec": 0, 00:22:04.137 "rw_mbytes_per_sec": 0, 00:22:04.137 "r_mbytes_per_sec": 0, 00:22:04.137 "w_mbytes_per_sec": 0 00:22:04.137 }, 00:22:04.137 "claimed": true, 00:22:04.137 "claim_type": "exclusive_write", 
00:22:04.137 "zoned": false, 00:22:04.137 "supported_io_types": { 00:22:04.137 "read": true, 00:22:04.137 "write": true, 00:22:04.137 "unmap": true, 00:22:04.137 "flush": true, 00:22:04.137 "reset": true, 00:22:04.137 "nvme_admin": false, 00:22:04.137 "nvme_io": false, 00:22:04.137 "nvme_io_md": false, 00:22:04.137 "write_zeroes": true, 00:22:04.137 "zcopy": true, 00:22:04.137 "get_zone_info": false, 00:22:04.137 "zone_management": false, 00:22:04.137 "zone_append": false, 00:22:04.137 "compare": false, 00:22:04.137 "compare_and_write": false, 00:22:04.137 "abort": true, 00:22:04.137 "seek_hole": false, 00:22:04.137 "seek_data": false, 00:22:04.137 "copy": true, 00:22:04.137 "nvme_iov_md": false 00:22:04.137 }, 00:22:04.137 "memory_domains": [ 00:22:04.137 { 00:22:04.137 "dma_device_id": "system", 00:22:04.137 "dma_device_type": 1 00:22:04.137 }, 00:22:04.137 { 00:22:04.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.137 "dma_device_type": 2 00:22:04.137 } 00:22:04.137 ], 00:22:04.137 "driver_specific": {} 00:22:04.137 } 00:22:04.137 ] 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:04.137 
18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:04.137 "name": "Existed_Raid", 00:22:04.137 "uuid": "3708188e-979f-47d9-a267-61cd6a70c78f", 00:22:04.137 "strip_size_kb": 0, 00:22:04.137 "state": "online", 00:22:04.137 "raid_level": "raid1", 00:22:04.137 "superblock": true, 00:22:04.137 "num_base_bdevs": 2, 00:22:04.137 "num_base_bdevs_discovered": 2, 00:22:04.137 
"num_base_bdevs_operational": 2, 00:22:04.137 "base_bdevs_list": [ 00:22:04.137 { 00:22:04.137 "name": "BaseBdev1", 00:22:04.137 "uuid": "e8a7d331-20f0-4087-8888-847a4e249f47", 00:22:04.137 "is_configured": true, 00:22:04.137 "data_offset": 256, 00:22:04.137 "data_size": 7936 00:22:04.137 }, 00:22:04.137 { 00:22:04.137 "name": "BaseBdev2", 00:22:04.137 "uuid": "29a80d0d-2127-4cb6-b373-b4d8db7aa2d3", 00:22:04.137 "is_configured": true, 00:22:04.137 "data_offset": 256, 00:22:04.137 "data_size": 7936 00:22:04.137 } 00:22:04.137 ] 00:22:04.137 }' 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:04.137 18:20:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.704 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:04.704 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:04.704 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:04.704 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:04.704 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:04.704 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:04.704 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:04.704 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:04.704 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.704 18:20:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.704 [2024-12-06 18:20:30.132859] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:04.704 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.704 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:04.704 "name": "Existed_Raid", 00:22:04.704 "aliases": [ 00:22:04.704 "3708188e-979f-47d9-a267-61cd6a70c78f" 00:22:04.704 ], 00:22:04.704 "product_name": "Raid Volume", 00:22:04.704 "block_size": 4128, 00:22:04.704 "num_blocks": 7936, 00:22:04.704 "uuid": "3708188e-979f-47d9-a267-61cd6a70c78f", 00:22:04.704 "md_size": 32, 00:22:04.704 "md_interleave": true, 00:22:04.704 "dif_type": 0, 00:22:04.704 "assigned_rate_limits": { 00:22:04.704 "rw_ios_per_sec": 0, 00:22:04.704 "rw_mbytes_per_sec": 0, 00:22:04.704 "r_mbytes_per_sec": 0, 00:22:04.704 "w_mbytes_per_sec": 0 00:22:04.704 }, 00:22:04.704 "claimed": false, 00:22:04.704 "zoned": false, 00:22:04.704 "supported_io_types": { 00:22:04.704 "read": true, 00:22:04.704 "write": true, 00:22:04.704 "unmap": false, 00:22:04.704 "flush": false, 00:22:04.704 "reset": true, 00:22:04.704 "nvme_admin": false, 00:22:04.704 "nvme_io": false, 00:22:04.704 "nvme_io_md": false, 00:22:04.704 "write_zeroes": true, 00:22:04.704 "zcopy": false, 00:22:04.704 "get_zone_info": false, 00:22:04.704 "zone_management": false, 00:22:04.704 "zone_append": false, 00:22:04.704 "compare": false, 00:22:04.704 "compare_and_write": false, 00:22:04.704 "abort": false, 00:22:04.704 "seek_hole": false, 00:22:04.704 "seek_data": false, 00:22:04.704 "copy": false, 00:22:04.704 "nvme_iov_md": false 00:22:04.704 }, 00:22:04.704 "memory_domains": [ 00:22:04.704 { 00:22:04.704 "dma_device_id": "system", 00:22:04.704 "dma_device_type": 1 00:22:04.704 }, 00:22:04.704 { 00:22:04.704 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:04.704 "dma_device_type": 2 00:22:04.704 }, 00:22:04.704 { 00:22:04.704 "dma_device_id": "system", 00:22:04.704 "dma_device_type": 1 00:22:04.704 }, 00:22:04.704 { 00:22:04.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.704 "dma_device_type": 2 00:22:04.704 } 00:22:04.704 ], 00:22:04.704 "driver_specific": { 00:22:04.704 "raid": { 00:22:04.704 "uuid": "3708188e-979f-47d9-a267-61cd6a70c78f", 00:22:04.704 "strip_size_kb": 0, 00:22:04.704 "state": "online", 00:22:04.704 "raid_level": "raid1", 00:22:04.704 "superblock": true, 00:22:04.705 "num_base_bdevs": 2, 00:22:04.705 "num_base_bdevs_discovered": 2, 00:22:04.705 "num_base_bdevs_operational": 2, 00:22:04.705 "base_bdevs_list": [ 00:22:04.705 { 00:22:04.705 "name": "BaseBdev1", 00:22:04.705 "uuid": "e8a7d331-20f0-4087-8888-847a4e249f47", 00:22:04.705 "is_configured": true, 00:22:04.705 "data_offset": 256, 00:22:04.705 "data_size": 7936 00:22:04.705 }, 00:22:04.705 { 00:22:04.705 "name": "BaseBdev2", 00:22:04.705 "uuid": "29a80d0d-2127-4cb6-b373-b4d8db7aa2d3", 00:22:04.705 "is_configured": true, 00:22:04.705 "data_offset": 256, 00:22:04.705 "data_size": 7936 00:22:04.705 } 00:22:04.705 ] 00:22:04.705 } 00:22:04.705 } 00:22:04.705 }' 00:22:04.705 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:04.963 BaseBdev2' 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:04.963 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.964 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.964 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.964 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:04.964 
18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:04.964 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:04.964 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.964 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.964 [2024-12-06 18:20:30.388802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:05.222 18:20:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:05.222 "name": "Existed_Raid", 00:22:05.222 "uuid": "3708188e-979f-47d9-a267-61cd6a70c78f", 00:22:05.222 "strip_size_kb": 0, 00:22:05.222 "state": "online", 00:22:05.222 "raid_level": "raid1", 00:22:05.222 "superblock": true, 00:22:05.222 "num_base_bdevs": 2, 00:22:05.222 "num_base_bdevs_discovered": 1, 00:22:05.222 "num_base_bdevs_operational": 1, 00:22:05.222 "base_bdevs_list": [ 00:22:05.222 { 00:22:05.222 "name": null, 00:22:05.222 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:05.222 "is_configured": false, 00:22:05.222 "data_offset": 0, 00:22:05.222 "data_size": 7936 00:22:05.222 }, 00:22:05.222 { 00:22:05.222 "name": "BaseBdev2", 00:22:05.222 "uuid": "29a80d0d-2127-4cb6-b373-b4d8db7aa2d3", 00:22:05.222 "is_configured": true, 00:22:05.222 "data_offset": 256, 00:22:05.222 "data_size": 7936 00:22:05.222 } 00:22:05.222 ] 00:22:05.222 }' 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.222 18:20:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.790 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:05.790 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:05.790 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:05.791 18:20:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.791 [2024-12-06 18:20:31.086768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:05.791 [2024-12-06 18:20:31.087305] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:05.791 [2024-12-06 18:20:31.184275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:05.791 [2024-12-06 18:20:31.184357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:05.791 [2024-12-06 18:20:31.184377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88971 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88971 ']' 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88971 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88971 00:22:05.791 killing process with pid 88971 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88971' 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88971 00:22:05.791 [2024-12-06 18:20:31.271630] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:05.791 18:20:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88971 00:22:05.791 [2024-12-06 18:20:31.287632] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:07.168 
************************************ 00:22:07.168 END TEST raid_state_function_test_sb_md_interleaved 00:22:07.168 ************************************ 00:22:07.168 18:20:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:22:07.168 00:22:07.168 real 0m5.884s 00:22:07.168 user 0m8.796s 00:22:07.168 sys 0m0.873s 00:22:07.168 18:20:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.168 18:20:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.168 18:20:32 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:22:07.168 18:20:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:07.168 18:20:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.168 18:20:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:07.168 ************************************ 00:22:07.168 START TEST raid_superblock_test_md_interleaved 00:22:07.168 ************************************ 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89228 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89228 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89228 ']' 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.168 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.169 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.169 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.169 18:20:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.169 [2024-12-06 18:20:32.632705] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:22:07.169 [2024-12-06 18:20:32.633227] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89228 ] 00:22:07.427 [2024-12-06 18:20:32.811513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.687 [2024-12-06 18:20:32.962187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.687 [2024-12-06 18:20:33.196283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:07.687 [2024-12-06 18:20:33.196352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.255 malloc1 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.255 [2024-12-06 18:20:33.689118] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:08.255 [2024-12-06 18:20:33.689211] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.255 [2024-12-06 18:20:33.689246] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:08.255 [2024-12-06 18:20:33.689262] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.255 
[2024-12-06 18:20:33.692364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.255 pt1 00:22:08.255 [2024-12-06 18:20:33.692573] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.255 malloc2 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.255 [2024-12-06 18:20:33.750675] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:08.255 [2024-12-06 18:20:33.751028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.255 [2024-12-06 18:20:33.751073] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:08.255 [2024-12-06 18:20:33.751090] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.255 [2024-12-06 18:20:33.753736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.255 [2024-12-06 18:20:33.753802] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:08.255 pt2 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.255 [2024-12-06 18:20:33.758700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:08.255 [2024-12-06 18:20:33.761580] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:08.255 [2024-12-06 18:20:33.762037] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:08.255 [2024-12-06 18:20:33.762169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:08.255 [2024-12-06 18:20:33.762320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:08.255 [2024-12-06 18:20:33.762707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:08.255 [2024-12-06 18:20:33.762737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:08.255 [2024-12-06 18:20:33.762912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.255 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.256 
18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.256 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.256 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.256 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.256 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.256 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.514 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.514 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.514 "name": "raid_bdev1", 00:22:08.514 "uuid": "79d73a70-cbcd-4605-86c2-0fd3fc74eadc", 00:22:08.514 "strip_size_kb": 0, 00:22:08.514 "state": "online", 00:22:08.514 "raid_level": "raid1", 00:22:08.514 "superblock": true, 00:22:08.514 "num_base_bdevs": 2, 00:22:08.514 "num_base_bdevs_discovered": 2, 00:22:08.514 "num_base_bdevs_operational": 2, 00:22:08.514 "base_bdevs_list": [ 00:22:08.514 { 00:22:08.514 "name": "pt1", 00:22:08.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:08.514 "is_configured": true, 00:22:08.514 "data_offset": 256, 00:22:08.514 "data_size": 7936 00:22:08.514 }, 00:22:08.514 { 00:22:08.514 "name": "pt2", 00:22:08.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:08.514 "is_configured": true, 00:22:08.514 "data_offset": 256, 00:22:08.514 "data_size": 7936 00:22:08.514 } 00:22:08.514 ] 00:22:08.514 }' 00:22:08.514 18:20:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.514 18:20:33 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.773 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:08.773 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:08.773 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:08.773 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:08.773 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:08.773 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:08.773 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:08.773 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:08.773 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.773 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.032 [2024-12-06 18:20:34.291657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:09.032 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.032 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:09.032 "name": "raid_bdev1", 00:22:09.032 "aliases": [ 00:22:09.032 "79d73a70-cbcd-4605-86c2-0fd3fc74eadc" 00:22:09.032 ], 00:22:09.032 "product_name": "Raid Volume", 00:22:09.032 "block_size": 4128, 00:22:09.032 "num_blocks": 7936, 00:22:09.032 "uuid": "79d73a70-cbcd-4605-86c2-0fd3fc74eadc", 00:22:09.032 "md_size": 32, 
00:22:09.032 "md_interleave": true, 00:22:09.032 "dif_type": 0, 00:22:09.032 "assigned_rate_limits": { 00:22:09.032 "rw_ios_per_sec": 0, 00:22:09.032 "rw_mbytes_per_sec": 0, 00:22:09.032 "r_mbytes_per_sec": 0, 00:22:09.032 "w_mbytes_per_sec": 0 00:22:09.032 }, 00:22:09.032 "claimed": false, 00:22:09.032 "zoned": false, 00:22:09.032 "supported_io_types": { 00:22:09.032 "read": true, 00:22:09.032 "write": true, 00:22:09.032 "unmap": false, 00:22:09.032 "flush": false, 00:22:09.032 "reset": true, 00:22:09.032 "nvme_admin": false, 00:22:09.032 "nvme_io": false, 00:22:09.032 "nvme_io_md": false, 00:22:09.032 "write_zeroes": true, 00:22:09.032 "zcopy": false, 00:22:09.032 "get_zone_info": false, 00:22:09.032 "zone_management": false, 00:22:09.032 "zone_append": false, 00:22:09.032 "compare": false, 00:22:09.032 "compare_and_write": false, 00:22:09.032 "abort": false, 00:22:09.032 "seek_hole": false, 00:22:09.032 "seek_data": false, 00:22:09.032 "copy": false, 00:22:09.032 "nvme_iov_md": false 00:22:09.032 }, 00:22:09.032 "memory_domains": [ 00:22:09.032 { 00:22:09.032 "dma_device_id": "system", 00:22:09.032 "dma_device_type": 1 00:22:09.032 }, 00:22:09.032 { 00:22:09.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.032 "dma_device_type": 2 00:22:09.032 }, 00:22:09.032 { 00:22:09.032 "dma_device_id": "system", 00:22:09.032 "dma_device_type": 1 00:22:09.032 }, 00:22:09.032 { 00:22:09.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.032 "dma_device_type": 2 00:22:09.032 } 00:22:09.032 ], 00:22:09.032 "driver_specific": { 00:22:09.032 "raid": { 00:22:09.032 "uuid": "79d73a70-cbcd-4605-86c2-0fd3fc74eadc", 00:22:09.032 "strip_size_kb": 0, 00:22:09.032 "state": "online", 00:22:09.032 "raid_level": "raid1", 00:22:09.032 "superblock": true, 00:22:09.032 "num_base_bdevs": 2, 00:22:09.032 "num_base_bdevs_discovered": 2, 00:22:09.032 "num_base_bdevs_operational": 2, 00:22:09.032 "base_bdevs_list": [ 00:22:09.032 { 00:22:09.032 "name": "pt1", 00:22:09.032 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:22:09.032 "is_configured": true, 00:22:09.032 "data_offset": 256, 00:22:09.032 "data_size": 7936 00:22:09.032 }, 00:22:09.032 { 00:22:09.032 "name": "pt2", 00:22:09.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:09.032 "is_configured": true, 00:22:09.032 "data_offset": 256, 00:22:09.032 "data_size": 7936 00:22:09.032 } 00:22:09.032 ] 00:22:09.032 } 00:22:09.032 } 00:22:09.032 }' 00:22:09.032 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:09.033 pt2' 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:09.033 18:20:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:09.033 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.292 [2024-12-06 18:20:34.559651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=79d73a70-cbcd-4605-86c2-0fd3fc74eadc 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 79d73a70-cbcd-4605-86c2-0fd3fc74eadc ']' 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.292 [2024-12-06 18:20:34.611204] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:09.292 [2024-12-06 18:20:34.611355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:09.292 [2024-12-06 18:20:34.611594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:09.292 [2024-12-06 18:20:34.611801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:09.292 [2024-12-06 18:20:34.611836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.292 18:20:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.292 18:20:34 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.292 [2024-12-06 18:20:34.747281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:09.292 [2024-12-06 18:20:34.750102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:09.292 [2024-12-06 18:20:34.750208] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:22:09.292 [2024-12-06 18:20:34.750293] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:09.292 [2024-12-06 18:20:34.750319] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:09.292 [2024-12-06 18:20:34.750335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:09.292 request: 00:22:09.292 { 00:22:09.292 "name": "raid_bdev1", 00:22:09.292 "raid_level": "raid1", 00:22:09.292 "base_bdevs": [ 00:22:09.292 "malloc1", 00:22:09.292 "malloc2" 00:22:09.292 ], 00:22:09.292 "superblock": false, 00:22:09.292 "method": "bdev_raid_create", 00:22:09.292 "req_id": 1 00:22:09.292 } 00:22:09.292 Got JSON-RPC error response 00:22:09.292 response: 00:22:09.292 { 00:22:09.292 "code": -17, 00:22:09.292 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:09.292 } 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.292 18:20:34 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.292 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.551 [2024-12-06 18:20:34.815375] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:09.551 [2024-12-06 18:20:34.815580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:09.551 [2024-12-06 18:20:34.815653] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:09.551 [2024-12-06 18:20:34.815849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:09.551 [2024-12-06 18:20:34.818566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:09.551 [2024-12-06 18:20:34.818723] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:09.551 [2024-12-06 18:20:34.819062] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:09.551 pt1 00:22:09.551 [2024-12-06 18:20:34.819256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.551 18:20:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.551 
"name": "raid_bdev1", 00:22:09.551 "uuid": "79d73a70-cbcd-4605-86c2-0fd3fc74eadc", 00:22:09.551 "strip_size_kb": 0, 00:22:09.551 "state": "configuring", 00:22:09.551 "raid_level": "raid1", 00:22:09.551 "superblock": true, 00:22:09.551 "num_base_bdevs": 2, 00:22:09.551 "num_base_bdevs_discovered": 1, 00:22:09.551 "num_base_bdevs_operational": 2, 00:22:09.551 "base_bdevs_list": [ 00:22:09.551 { 00:22:09.551 "name": "pt1", 00:22:09.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:09.551 "is_configured": true, 00:22:09.551 "data_offset": 256, 00:22:09.551 "data_size": 7936 00:22:09.551 }, 00:22:09.551 { 00:22:09.551 "name": null, 00:22:09.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:09.551 "is_configured": false, 00:22:09.551 "data_offset": 256, 00:22:09.551 "data_size": 7936 00:22:09.551 } 00:22:09.551 ] 00:22:09.551 }' 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.551 18:20:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.116 [2024-12-06 18:20:35.351731] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:10.116 [2024-12-06 18:20:35.352098] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.116 [2024-12-06 18:20:35.352179] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:10.116 [2024-12-06 18:20:35.352439] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.116 [2024-12-06 18:20:35.352760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.116 [2024-12-06 18:20:35.352946] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:10.116 [2024-12-06 18:20:35.353150] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:10.116 [2024-12-06 18:20:35.353309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:10.116 [2024-12-06 18:20:35.353554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:10.116 [2024-12-06 18:20:35.353691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:10.116 [2024-12-06 18:20:35.353852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:10.116 [2024-12-06 18:20:35.354070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:10.116 [2024-12-06 18:20:35.354189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:10.116 [2024-12-06 18:20:35.354415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.116 pt2 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:10.116 18:20:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.116 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.116 "name": 
"raid_bdev1", 00:22:10.116 "uuid": "79d73a70-cbcd-4605-86c2-0fd3fc74eadc", 00:22:10.116 "strip_size_kb": 0, 00:22:10.116 "state": "online", 00:22:10.116 "raid_level": "raid1", 00:22:10.117 "superblock": true, 00:22:10.117 "num_base_bdevs": 2, 00:22:10.117 "num_base_bdevs_discovered": 2, 00:22:10.117 "num_base_bdevs_operational": 2, 00:22:10.117 "base_bdevs_list": [ 00:22:10.117 { 00:22:10.117 "name": "pt1", 00:22:10.117 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:10.117 "is_configured": true, 00:22:10.117 "data_offset": 256, 00:22:10.117 "data_size": 7936 00:22:10.117 }, 00:22:10.117 { 00:22:10.117 "name": "pt2", 00:22:10.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:10.117 "is_configured": true, 00:22:10.117 "data_offset": 256, 00:22:10.117 "data_size": 7936 00:22:10.117 } 00:22:10.117 ] 00:22:10.117 }' 00:22:10.117 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.117 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.683 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:10.683 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:10.683 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:10.683 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:10.683 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:10.683 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:10.683 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:10.683 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:10.683 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.684 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.684 [2024-12-06 18:20:35.904236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:10.684 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.684 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:10.684 "name": "raid_bdev1", 00:22:10.684 "aliases": [ 00:22:10.684 "79d73a70-cbcd-4605-86c2-0fd3fc74eadc" 00:22:10.684 ], 00:22:10.684 "product_name": "Raid Volume", 00:22:10.684 "block_size": 4128, 00:22:10.684 "num_blocks": 7936, 00:22:10.684 "uuid": "79d73a70-cbcd-4605-86c2-0fd3fc74eadc", 00:22:10.684 "md_size": 32, 00:22:10.684 "md_interleave": true, 00:22:10.684 "dif_type": 0, 00:22:10.684 "assigned_rate_limits": { 00:22:10.684 "rw_ios_per_sec": 0, 00:22:10.684 "rw_mbytes_per_sec": 0, 00:22:10.684 "r_mbytes_per_sec": 0, 00:22:10.684 "w_mbytes_per_sec": 0 00:22:10.684 }, 00:22:10.684 "claimed": false, 00:22:10.684 "zoned": false, 00:22:10.684 "supported_io_types": { 00:22:10.684 "read": true, 00:22:10.684 "write": true, 00:22:10.684 "unmap": false, 00:22:10.684 "flush": false, 00:22:10.684 "reset": true, 00:22:10.684 "nvme_admin": false, 00:22:10.684 "nvme_io": false, 00:22:10.684 "nvme_io_md": false, 00:22:10.684 "write_zeroes": true, 00:22:10.684 "zcopy": false, 00:22:10.684 "get_zone_info": false, 00:22:10.684 "zone_management": false, 00:22:10.684 "zone_append": false, 00:22:10.684 "compare": false, 00:22:10.684 "compare_and_write": false, 00:22:10.684 "abort": false, 00:22:10.684 "seek_hole": false, 00:22:10.684 "seek_data": false, 00:22:10.684 "copy": false, 00:22:10.684 "nvme_iov_md": false 00:22:10.684 }, 
00:22:10.684 "memory_domains": [ 00:22:10.684 { 00:22:10.684 "dma_device_id": "system", 00:22:10.684 "dma_device_type": 1 00:22:10.684 }, 00:22:10.684 { 00:22:10.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.684 "dma_device_type": 2 00:22:10.684 }, 00:22:10.684 { 00:22:10.684 "dma_device_id": "system", 00:22:10.684 "dma_device_type": 1 00:22:10.684 }, 00:22:10.684 { 00:22:10.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.684 "dma_device_type": 2 00:22:10.684 } 00:22:10.684 ], 00:22:10.684 "driver_specific": { 00:22:10.684 "raid": { 00:22:10.684 "uuid": "79d73a70-cbcd-4605-86c2-0fd3fc74eadc", 00:22:10.684 "strip_size_kb": 0, 00:22:10.684 "state": "online", 00:22:10.684 "raid_level": "raid1", 00:22:10.684 "superblock": true, 00:22:10.684 "num_base_bdevs": 2, 00:22:10.684 "num_base_bdevs_discovered": 2, 00:22:10.684 "num_base_bdevs_operational": 2, 00:22:10.684 "base_bdevs_list": [ 00:22:10.684 { 00:22:10.684 "name": "pt1", 00:22:10.684 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:10.684 "is_configured": true, 00:22:10.684 "data_offset": 256, 00:22:10.684 "data_size": 7936 00:22:10.684 }, 00:22:10.684 { 00:22:10.684 "name": "pt2", 00:22:10.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:10.684 "is_configured": true, 00:22:10.684 "data_offset": 256, 00:22:10.684 "data_size": 7936 00:22:10.684 } 00:22:10.684 ] 00:22:10.684 } 00:22:10.684 } 00:22:10.684 }' 00:22:10.684 18:20:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:10.684 pt2' 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:10.684 [2024-12-06 18:20:36.164314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:10.684 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 79d73a70-cbcd-4605-86c2-0fd3fc74eadc '!=' 79d73a70-cbcd-4605-86c2-0fd3fc74eadc ']' 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.943 [2024-12-06 18:20:36.236034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.943 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.944 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.944 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:22:10.944 "name": "raid_bdev1", 00:22:10.944 "uuid": "79d73a70-cbcd-4605-86c2-0fd3fc74eadc", 00:22:10.944 "strip_size_kb": 0, 00:22:10.944 "state": "online", 00:22:10.944 "raid_level": "raid1", 00:22:10.944 "superblock": true, 00:22:10.944 "num_base_bdevs": 2, 00:22:10.944 "num_base_bdevs_discovered": 1, 00:22:10.944 "num_base_bdevs_operational": 1, 00:22:10.944 "base_bdevs_list": [ 00:22:10.944 { 00:22:10.944 "name": null, 00:22:10.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.944 "is_configured": false, 00:22:10.944 "data_offset": 0, 00:22:10.944 "data_size": 7936 00:22:10.944 }, 00:22:10.944 { 00:22:10.944 "name": "pt2", 00:22:10.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:10.944 "is_configured": true, 00:22:10.944 "data_offset": 256, 00:22:10.944 "data_size": 7936 00:22:10.944 } 00:22:10.944 ] 00:22:10.944 }' 00:22:10.944 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.944 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.511 [2024-12-06 18:20:36.760154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:11.511 [2024-12-06 18:20:36.760218] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:11.511 [2024-12-06 18:20:36.760340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:11.511 [2024-12-06 18:20:36.760418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:11.511 [2024-12-06 
18:20:36.760440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.511 [2024-12-06 18:20:36.828262] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:11.511 [2024-12-06 18:20:36.828332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.511 [2024-12-06 18:20:36.828375] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:11.511 [2024-12-06 18:20:36.828404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.511 [2024-12-06 18:20:36.831270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.511 [2024-12-06 18:20:36.831322] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:11.511 [2024-12-06 18:20:36.831409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:11.511 [2024-12-06 18:20:36.831493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:11.511 [2024-12-06 18:20:36.831595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:11.511 [2024-12-06 18:20:36.831619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:22:11.511 [2024-12-06 18:20:36.831792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:11.511 pt2 00:22:11.511 [2024-12-06 18:20:36.831905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:11.511 [2024-12-06 18:20:36.831928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:11.511 [2024-12-06 18:20:36.832017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.511 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.512 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.512 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.512 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.512 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.512 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.512 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.512 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.512 "name": "raid_bdev1", 00:22:11.512 "uuid": "79d73a70-cbcd-4605-86c2-0fd3fc74eadc", 00:22:11.512 "strip_size_kb": 0, 00:22:11.512 "state": "online", 00:22:11.512 "raid_level": "raid1", 00:22:11.512 "superblock": true, 00:22:11.512 "num_base_bdevs": 2, 00:22:11.512 "num_base_bdevs_discovered": 1, 00:22:11.512 "num_base_bdevs_operational": 1, 00:22:11.512 "base_bdevs_list": [ 00:22:11.512 { 00:22:11.512 "name": null, 00:22:11.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.512 "is_configured": false, 00:22:11.512 "data_offset": 256, 00:22:11.512 "data_size": 7936 00:22:11.512 }, 00:22:11.512 { 00:22:11.512 "name": "pt2", 00:22:11.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:11.512 "is_configured": true, 00:22:11.512 "data_offset": 256, 00:22:11.512 "data_size": 7936 00:22:11.512 } 00:22:11.512 ] 00:22:11.512 }' 00:22:11.512 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.512 18:20:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.080 [2024-12-06 18:20:37.300413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:12.080 [2024-12-06 18:20:37.300666] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:12.080 [2024-12-06 18:20:37.300841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:12.080 [2024-12-06 18:20:37.300927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:12.080 [2024-12-06 18:20:37.300944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.080 [2024-12-06 18:20:37.360396] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:12.080 [2024-12-06 18:20:37.360660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.080 [2024-12-06 18:20:37.360740] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:12.080 [2024-12-06 18:20:37.361055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.080 [2024-12-06 18:20:37.364016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.080 pt1 00:22:12.080 [2024-12-06 18:20:37.364226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:12.080 [2024-12-06 18:20:37.364321] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:12.080 [2024-12-06 18:20:37.364389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:12.080 [2024-12-06 18:20:37.364587] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:12.080 [2024-12-06 18:20:37.364607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:12.080 [2024-12-06 18:20:37.364633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:12.080 [2024-12-06 18:20:37.364705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:12.080 [2024-12-06 18:20:37.364859] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:12.080 [2024-12-06 18:20:37.364876] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:12.080 [2024-12-06 18:20:37.364982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:12.080 [2024-12-06 18:20:37.365069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:12.080 [2024-12-06 18:20:37.365087] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:12.080 [2024-12-06 18:20:37.365190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:12.080 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:12.081 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.081 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.081 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.081 
18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.081 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.081 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.081 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.081 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.081 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.081 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.081 "name": "raid_bdev1", 00:22:12.081 "uuid": "79d73a70-cbcd-4605-86c2-0fd3fc74eadc", 00:22:12.081 "strip_size_kb": 0, 00:22:12.081 "state": "online", 00:22:12.081 "raid_level": "raid1", 00:22:12.081 "superblock": true, 00:22:12.081 "num_base_bdevs": 2, 00:22:12.081 "num_base_bdevs_discovered": 1, 00:22:12.081 "num_base_bdevs_operational": 1, 00:22:12.081 "base_bdevs_list": [ 00:22:12.081 { 00:22:12.081 "name": null, 00:22:12.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.081 "is_configured": false, 00:22:12.081 "data_offset": 256, 00:22:12.081 "data_size": 7936 00:22:12.081 }, 00:22:12.081 { 00:22:12.081 "name": "pt2", 00:22:12.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:12.081 "is_configured": true, 00:22:12.081 "data_offset": 256, 00:22:12.081 "data_size": 7936 00:22:12.081 } 00:22:12.081 ] 00:22:12.081 }' 00:22:12.081 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.081 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.649 [2024-12-06 18:20:37.932806] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 79d73a70-cbcd-4605-86c2-0fd3fc74eadc '!=' 79d73a70-cbcd-4605-86c2-0fd3fc74eadc ']' 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89228 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89228 ']' 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89228 00:22:12.649 18:20:37 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.649 18:20:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89228 00:22:12.649 killing process with pid 89228 00:22:12.649 18:20:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.649 18:20:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.649 18:20:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89228' 00:22:12.649 18:20:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89228 00:22:12.649 [2024-12-06 18:20:38.015940] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:12.649 [2024-12-06 18:20:38.016038] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:12.649 18:20:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89228 00:22:12.649 [2024-12-06 18:20:38.016103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:12.649 [2024-12-06 18:20:38.016144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:12.909 [2024-12-06 18:20:38.202661] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:13.847 ************************************ 00:22:13.847 END TEST raid_superblock_test_md_interleaved 00:22:13.847 ************************************ 00:22:13.847 18:20:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:22:13.847 00:22:13.847 real 0m6.762s 00:22:13.847 user 0m10.625s 
00:22:13.847 sys 0m1.008s 00:22:13.847 18:20:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.847 18:20:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:13.847 18:20:39 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:22:13.847 18:20:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:13.847 18:20:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.847 18:20:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:13.847 ************************************ 00:22:13.847 START TEST raid_rebuild_test_sb_md_interleaved 00:22:13.847 ************************************ 00:22:13.847 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:22:13.847 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:13.847 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:13.847 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89558 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89558 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89558 ']' 00:22:13.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.848 18:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.107 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:14.107 Zero copy mechanism will not be used. 00:22:14.107 [2024-12-06 18:20:39.466716] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:22:14.107 [2024-12-06 18:20:39.466921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89558 ] 00:22:14.366 [2024-12-06 18:20:39.648559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.366 [2024-12-06 18:20:39.797549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.624 [2024-12-06 18:20:40.027126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:14.624 [2024-12-06 18:20:40.027588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:14.884 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.884 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:14.884 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:14.884 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:22:14.884 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.884 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.143 BaseBdev1_malloc 00:22:15.143 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.143 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:15.143 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.143 18:20:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.143 [2024-12-06 18:20:40.428683] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:15.143 [2024-12-06 18:20:40.428794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.143 [2024-12-06 18:20:40.428859] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:15.143 [2024-12-06 18:20:40.428881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.143 [2024-12-06 18:20:40.431542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.143 [2024-12-06 18:20:40.431594] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:15.143 BaseBdev1 00:22:15.143 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.143 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:15.143 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:22:15.143 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.143 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.143 BaseBdev2_malloc 00:22:15.143 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.143 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:15.143 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.143 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:22:15.143 [2024-12-06 18:20:40.481436] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:15.143 [2024-12-06 18:20:40.481907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.143 [2024-12-06 18:20:40.481954] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:15.143 [2024-12-06 18:20:40.481978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.143 [2024-12-06 18:20:40.484732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.143 [2024-12-06 18:20:40.484790] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:15.144 BaseBdev2 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.144 spare_malloc 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.144 spare_delay 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.144 [2024-12-06 18:20:40.548854] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:15.144 [2024-12-06 18:20:40.549262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.144 [2024-12-06 18:20:40.549309] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:15.144 [2024-12-06 18:20:40.549332] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.144 [2024-12-06 18:20:40.552040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.144 [2024-12-06 18:20:40.552088] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:15.144 spare 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.144 [2024-12-06 18:20:40.556944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:15.144 [2024-12-06 18:20:40.559666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:15.144 [2024-12-06 
18:20:40.559977] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:15.144 [2024-12-06 18:20:40.560003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:15.144 [2024-12-06 18:20:40.560108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:15.144 [2024-12-06 18:20:40.560231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:15.144 [2024-12-06 18:20:40.560247] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:15.144 [2024-12-06 18:20:40.560345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.144 "name": "raid_bdev1", 00:22:15.144 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:15.144 "strip_size_kb": 0, 00:22:15.144 "state": "online", 00:22:15.144 "raid_level": "raid1", 00:22:15.144 "superblock": true, 00:22:15.144 "num_base_bdevs": 2, 00:22:15.144 "num_base_bdevs_discovered": 2, 00:22:15.144 "num_base_bdevs_operational": 2, 00:22:15.144 "base_bdevs_list": [ 00:22:15.144 { 00:22:15.144 "name": "BaseBdev1", 00:22:15.144 "uuid": "0029f05e-aab6-5324-8252-5c423151cbed", 00:22:15.144 "is_configured": true, 00:22:15.144 "data_offset": 256, 00:22:15.144 "data_size": 7936 00:22:15.144 }, 00:22:15.144 { 00:22:15.144 "name": "BaseBdev2", 00:22:15.144 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:15.144 "is_configured": true, 00:22:15.144 "data_offset": 256, 00:22:15.144 "data_size": 7936 00:22:15.144 } 00:22:15.144 ] 00:22:15.144 }' 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.144 18:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.711 18:20:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.711 [2024-12-06 18:20:41.089528] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:15.711 18:20:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.711 [2024-12-06 18:20:41.193065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.711 18:20:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.711 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.982 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.982 "name": "raid_bdev1", 00:22:15.982 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:15.982 "strip_size_kb": 0, 00:22:15.982 "state": "online", 00:22:15.982 "raid_level": "raid1", 00:22:15.982 "superblock": true, 00:22:15.982 "num_base_bdevs": 2, 00:22:15.982 "num_base_bdevs_discovered": 1, 00:22:15.982 "num_base_bdevs_operational": 1, 00:22:15.982 "base_bdevs_list": [ 00:22:15.982 { 00:22:15.982 "name": null, 00:22:15.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.982 "is_configured": false, 00:22:15.982 "data_offset": 0, 00:22:15.982 "data_size": 7936 00:22:15.982 }, 00:22:15.982 { 00:22:15.982 "name": "BaseBdev2", 00:22:15.982 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:15.982 "is_configured": true, 00:22:15.982 "data_offset": 256, 00:22:15.982 "data_size": 7936 00:22:15.982 } 00:22:15.982 ] 00:22:15.982 }' 00:22:15.982 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.982 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:16.247 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:16.247 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.247 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:16.247 [2024-12-06 18:20:41.693335] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:16.247 [2024-12-06 18:20:41.711389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:16.247 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.247 18:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:16.247 [2024-12-06 18:20:41.714120] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:17.622 "name": "raid_bdev1", 00:22:17.622 
"uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:17.622 "strip_size_kb": 0, 00:22:17.622 "state": "online", 00:22:17.622 "raid_level": "raid1", 00:22:17.622 "superblock": true, 00:22:17.622 "num_base_bdevs": 2, 00:22:17.622 "num_base_bdevs_discovered": 2, 00:22:17.622 "num_base_bdevs_operational": 2, 00:22:17.622 "process": { 00:22:17.622 "type": "rebuild", 00:22:17.622 "target": "spare", 00:22:17.622 "progress": { 00:22:17.622 "blocks": 2560, 00:22:17.622 "percent": 32 00:22:17.622 } 00:22:17.622 }, 00:22:17.622 "base_bdevs_list": [ 00:22:17.622 { 00:22:17.622 "name": "spare", 00:22:17.622 "uuid": "cf171538-4a1a-5242-8eb9-f61f28c028af", 00:22:17.622 "is_configured": true, 00:22:17.622 "data_offset": 256, 00:22:17.622 "data_size": 7936 00:22:17.622 }, 00:22:17.622 { 00:22:17.622 "name": "BaseBdev2", 00:22:17.622 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:17.622 "is_configured": true, 00:22:17.622 "data_offset": 256, 00:22:17.622 "data_size": 7936 00:22:17.622 } 00:22:17.622 ] 00:22:17.622 }' 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:17.622 [2024-12-06 18:20:42.895972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:22:17.622 [2024-12-06 18:20:42.924704] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:17.622 [2024-12-06 18:20:42.924956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.622 [2024-12-06 18:20:42.924986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:17.622 [2024-12-06 18:20:42.925008] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.622 18:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.622 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.622 "name": "raid_bdev1", 00:22:17.623 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:17.623 "strip_size_kb": 0, 00:22:17.623 "state": "online", 00:22:17.623 "raid_level": "raid1", 00:22:17.623 "superblock": true, 00:22:17.623 "num_base_bdevs": 2, 00:22:17.623 "num_base_bdevs_discovered": 1, 00:22:17.623 "num_base_bdevs_operational": 1, 00:22:17.623 "base_bdevs_list": [ 00:22:17.623 { 00:22:17.623 "name": null, 00:22:17.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.623 "is_configured": false, 00:22:17.623 "data_offset": 0, 00:22:17.623 "data_size": 7936 00:22:17.623 }, 00:22:17.623 { 00:22:17.623 "name": "BaseBdev2", 00:22:17.623 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:17.623 "is_configured": true, 00:22:17.623 "data_offset": 256, 00:22:17.623 "data_size": 7936 00:22:17.623 } 00:22:17.623 ] 00:22:17.623 }' 00:22:17.623 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.623 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:18.190 "name": "raid_bdev1", 00:22:18.190 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:18.190 "strip_size_kb": 0, 00:22:18.190 "state": "online", 00:22:18.190 "raid_level": "raid1", 00:22:18.190 "superblock": true, 00:22:18.190 "num_base_bdevs": 2, 00:22:18.190 "num_base_bdevs_discovered": 1, 00:22:18.190 "num_base_bdevs_operational": 1, 00:22:18.190 "base_bdevs_list": [ 00:22:18.190 { 00:22:18.190 "name": null, 00:22:18.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.190 "is_configured": false, 00:22:18.190 "data_offset": 0, 00:22:18.190 "data_size": 7936 00:22:18.190 }, 00:22:18.190 { 00:22:18.190 "name": "BaseBdev2", 00:22:18.190 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:18.190 "is_configured": true, 00:22:18.190 "data_offset": 256, 00:22:18.190 "data_size": 7936 00:22:18.190 } 00:22:18.190 ] 00:22:18.190 }' 
00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:18.190 [2024-12-06 18:20:43.687420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:18.190 [2024-12-06 18:20:43.705306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.190 18:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:18.448 [2024-12-06 18:20:43.708341] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:19.383 "name": "raid_bdev1", 00:22:19.383 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:19.383 "strip_size_kb": 0, 00:22:19.383 "state": "online", 00:22:19.383 "raid_level": "raid1", 00:22:19.383 "superblock": true, 00:22:19.383 "num_base_bdevs": 2, 00:22:19.383 "num_base_bdevs_discovered": 2, 00:22:19.383 "num_base_bdevs_operational": 2, 00:22:19.383 "process": { 00:22:19.383 "type": "rebuild", 00:22:19.383 "target": "spare", 00:22:19.383 "progress": { 00:22:19.383 "blocks": 2304, 00:22:19.383 "percent": 29 00:22:19.383 } 00:22:19.383 }, 00:22:19.383 "base_bdevs_list": [ 00:22:19.383 { 00:22:19.383 "name": "spare", 00:22:19.383 "uuid": "cf171538-4a1a-5242-8eb9-f61f28c028af", 00:22:19.383 "is_configured": true, 00:22:19.383 "data_offset": 256, 00:22:19.383 "data_size": 7936 00:22:19.383 }, 00:22:19.383 { 00:22:19.383 "name": "BaseBdev2", 00:22:19.383 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:19.383 "is_configured": true, 00:22:19.383 "data_offset": 256, 00:22:19.383 "data_size": 7936 00:22:19.383 } 00:22:19.383 ] 00:22:19.383 }' 00:22:19.383 18:20:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:19.383 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:19.383 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=800 00:22:19.384 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:19.384 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:19.384 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:19.384 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:19.384 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:19.384 18:20:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:19.384 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.384 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.384 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:19.384 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.642 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.642 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:19.642 "name": "raid_bdev1", 00:22:19.642 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:19.642 "strip_size_kb": 0, 00:22:19.642 "state": "online", 00:22:19.642 "raid_level": "raid1", 00:22:19.642 "superblock": true, 00:22:19.642 "num_base_bdevs": 2, 00:22:19.642 "num_base_bdevs_discovered": 2, 00:22:19.642 "num_base_bdevs_operational": 2, 00:22:19.642 "process": { 00:22:19.642 "type": "rebuild", 00:22:19.642 "target": "spare", 00:22:19.642 "progress": { 00:22:19.642 "blocks": 2816, 00:22:19.642 "percent": 35 00:22:19.642 } 00:22:19.642 }, 00:22:19.642 "base_bdevs_list": [ 00:22:19.642 { 00:22:19.642 "name": "spare", 00:22:19.642 "uuid": "cf171538-4a1a-5242-8eb9-f61f28c028af", 00:22:19.642 "is_configured": true, 00:22:19.642 "data_offset": 256, 00:22:19.642 "data_size": 7936 00:22:19.642 }, 00:22:19.642 { 00:22:19.642 "name": "BaseBdev2", 00:22:19.642 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:19.642 "is_configured": true, 00:22:19.642 "data_offset": 256, 00:22:19.642 "data_size": 7936 00:22:19.642 } 00:22:19.642 ] 00:22:19.642 }' 00:22:19.642 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:19.642 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:19.642 18:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:19.642 18:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:19.642 18:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:20.577 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:20.577 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:20.577 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:20.577 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:20.577 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:20.577 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:20.577 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.577 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.577 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:20.577 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.577 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.577 18:20:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:20.577 "name": "raid_bdev1", 00:22:20.577 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:20.577 "strip_size_kb": 0, 00:22:20.577 "state": "online", 00:22:20.577 "raid_level": "raid1", 00:22:20.577 "superblock": true, 00:22:20.577 "num_base_bdevs": 2, 00:22:20.577 "num_base_bdevs_discovered": 2, 00:22:20.577 "num_base_bdevs_operational": 2, 00:22:20.577 "process": { 00:22:20.577 "type": "rebuild", 00:22:20.577 "target": "spare", 00:22:20.577 "progress": { 00:22:20.577 "blocks": 5888, 00:22:20.577 "percent": 74 00:22:20.577 } 00:22:20.577 }, 00:22:20.577 "base_bdevs_list": [ 00:22:20.577 { 00:22:20.577 "name": "spare", 00:22:20.577 "uuid": "cf171538-4a1a-5242-8eb9-f61f28c028af", 00:22:20.577 "is_configured": true, 00:22:20.577 "data_offset": 256, 00:22:20.577 "data_size": 7936 00:22:20.577 }, 00:22:20.577 { 00:22:20.577 "name": "BaseBdev2", 00:22:20.577 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:20.577 "is_configured": true, 00:22:20.577 "data_offset": 256, 00:22:20.577 "data_size": 7936 00:22:20.577 } 00:22:20.577 ] 00:22:20.577 }' 00:22:20.577 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:20.835 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.835 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:20.835 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.835 18:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:21.402 [2024-12-06 18:20:46.838756] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:21.402 [2024-12-06 18:20:46.838906] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:21.402 [2024-12-06 18:20:46.839120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.970 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:21.970 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:21.970 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:21.970 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:21.970 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:21.970 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:21.970 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.970 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.970 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.970 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:21.970 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.970 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:21.970 "name": "raid_bdev1", 00:22:21.970 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:21.970 "strip_size_kb": 0, 00:22:21.970 "state": "online", 00:22:21.970 "raid_level": "raid1", 00:22:21.970 "superblock": true, 00:22:21.970 "num_base_bdevs": 2, 00:22:21.970 
"num_base_bdevs_discovered": 2, 00:22:21.970 "num_base_bdevs_operational": 2, 00:22:21.970 "base_bdevs_list": [ 00:22:21.970 { 00:22:21.970 "name": "spare", 00:22:21.970 "uuid": "cf171538-4a1a-5242-8eb9-f61f28c028af", 00:22:21.970 "is_configured": true, 00:22:21.970 "data_offset": 256, 00:22:21.970 "data_size": 7936 00:22:21.970 }, 00:22:21.970 { 00:22:21.970 "name": "BaseBdev2", 00:22:21.970 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:21.970 "is_configured": true, 00:22:21.970 "data_offset": 256, 00:22:21.970 "data_size": 7936 00:22:21.970 } 00:22:21.970 ] 00:22:21.970 }' 00:22:21.970 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:21.970 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.971 18:20:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:21.971 "name": "raid_bdev1", 00:22:21.971 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:21.971 "strip_size_kb": 0, 00:22:21.971 "state": "online", 00:22:21.971 "raid_level": "raid1", 00:22:21.971 "superblock": true, 00:22:21.971 "num_base_bdevs": 2, 00:22:21.971 "num_base_bdevs_discovered": 2, 00:22:21.971 "num_base_bdevs_operational": 2, 00:22:21.971 "base_bdevs_list": [ 00:22:21.971 { 00:22:21.971 "name": "spare", 00:22:21.971 "uuid": "cf171538-4a1a-5242-8eb9-f61f28c028af", 00:22:21.971 "is_configured": true, 00:22:21.971 "data_offset": 256, 00:22:21.971 "data_size": 7936 00:22:21.971 }, 00:22:21.971 { 00:22:21.971 "name": "BaseBdev2", 00:22:21.971 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:21.971 "is_configured": true, 00:22:21.971 "data_offset": 256, 00:22:21.971 "data_size": 7936 00:22:21.971 } 00:22:21.971 ] 00:22:21.971 }' 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:21.971 18:20:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.971 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.229 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.229 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.229 "name": 
"raid_bdev1", 00:22:22.229 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:22.229 "strip_size_kb": 0, 00:22:22.229 "state": "online", 00:22:22.229 "raid_level": "raid1", 00:22:22.229 "superblock": true, 00:22:22.229 "num_base_bdevs": 2, 00:22:22.229 "num_base_bdevs_discovered": 2, 00:22:22.229 "num_base_bdevs_operational": 2, 00:22:22.229 "base_bdevs_list": [ 00:22:22.229 { 00:22:22.229 "name": "spare", 00:22:22.229 "uuid": "cf171538-4a1a-5242-8eb9-f61f28c028af", 00:22:22.229 "is_configured": true, 00:22:22.229 "data_offset": 256, 00:22:22.229 "data_size": 7936 00:22:22.229 }, 00:22:22.229 { 00:22:22.229 "name": "BaseBdev2", 00:22:22.229 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:22.229 "is_configured": true, 00:22:22.229 "data_offset": 256, 00:22:22.229 "data_size": 7936 00:22:22.229 } 00:22:22.229 ] 00:22:22.229 }' 00:22:22.229 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.229 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.488 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:22.488 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.488 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.488 [2024-12-06 18:20:47.993074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:22.488 [2024-12-06 18:20:47.994513] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:22.488 [2024-12-06 18:20:47.994683] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:22.488 [2024-12-06 18:20:47.994808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:22.488 [2024-12-06 
18:20:47.994828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:22.488 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.488 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.488 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.488 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.488 18:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.747 18:20:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.747 [2024-12-06 18:20:48.073040] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:22.747 [2024-12-06 18:20:48.073150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.747 [2024-12-06 18:20:48.073188] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:22.747 [2024-12-06 18:20:48.073206] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.747 [2024-12-06 18:20:48.076074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.747 [2024-12-06 18:20:48.076312] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:22.747 [2024-12-06 18:20:48.076418] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:22.747 [2024-12-06 18:20:48.076491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:22.747 [2024-12-06 18:20:48.076657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:22.747 spare 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.747 [2024-12-06 18:20:48.176816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:22.747 [2024-12-06 18:20:48.177066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:22.747 [2024-12-06 18:20:48.177282] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:22.747 [2024-12-06 18:20:48.177618] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:22.747 [2024-12-06 18:20:48.177741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:22.747 [2024-12-06 18:20:48.177956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.747 18:20:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.747 "name": "raid_bdev1", 00:22:22.747 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:22.747 "strip_size_kb": 0, 00:22:22.747 "state": "online", 00:22:22.747 "raid_level": "raid1", 00:22:22.747 "superblock": true, 00:22:22.747 "num_base_bdevs": 2, 00:22:22.747 "num_base_bdevs_discovered": 2, 00:22:22.747 "num_base_bdevs_operational": 2, 00:22:22.747 "base_bdevs_list": [ 00:22:22.747 { 00:22:22.747 "name": "spare", 00:22:22.747 "uuid": "cf171538-4a1a-5242-8eb9-f61f28c028af", 00:22:22.747 "is_configured": true, 00:22:22.747 "data_offset": 256, 00:22:22.747 "data_size": 7936 00:22:22.747 }, 00:22:22.747 { 00:22:22.747 "name": "BaseBdev2", 00:22:22.747 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:22.747 "is_configured": true, 00:22:22.747 "data_offset": 256, 00:22:22.747 "data_size": 7936 00:22:22.747 } 00:22:22.747 ] 00:22:22.747 }' 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.747 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:23.315 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:23.315 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:23.315 18:20:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:23.315 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:23.315 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:23.315 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.315 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.315 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.315 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:23.315 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.315 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:23.315 "name": "raid_bdev1", 00:22:23.315 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:23.315 "strip_size_kb": 0, 00:22:23.315 "state": "online", 00:22:23.315 "raid_level": "raid1", 00:22:23.315 "superblock": true, 00:22:23.315 "num_base_bdevs": 2, 00:22:23.315 "num_base_bdevs_discovered": 2, 00:22:23.315 "num_base_bdevs_operational": 2, 00:22:23.315 "base_bdevs_list": [ 00:22:23.315 { 00:22:23.315 "name": "spare", 00:22:23.315 "uuid": "cf171538-4a1a-5242-8eb9-f61f28c028af", 00:22:23.315 "is_configured": true, 00:22:23.315 "data_offset": 256, 00:22:23.315 "data_size": 7936 00:22:23.315 }, 00:22:23.315 { 00:22:23.315 "name": "BaseBdev2", 00:22:23.315 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:23.315 "is_configured": true, 00:22:23.315 "data_offset": 256, 00:22:23.315 "data_size": 7936 00:22:23.315 } 00:22:23.315 ] 00:22:23.315 }' 00:22:23.315 18:20:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:23.315 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:23.315 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:23.574 [2024-12-06 18:20:48.902247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:23.574 18:20:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:23.574 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.575 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.575 "name": "raid_bdev1", 00:22:23.575 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:23.575 "strip_size_kb": 0, 00:22:23.575 "state": "online", 00:22:23.575 
"raid_level": "raid1", 00:22:23.575 "superblock": true, 00:22:23.575 "num_base_bdevs": 2, 00:22:23.575 "num_base_bdevs_discovered": 1, 00:22:23.575 "num_base_bdevs_operational": 1, 00:22:23.575 "base_bdevs_list": [ 00:22:23.575 { 00:22:23.575 "name": null, 00:22:23.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.575 "is_configured": false, 00:22:23.575 "data_offset": 0, 00:22:23.575 "data_size": 7936 00:22:23.575 }, 00:22:23.575 { 00:22:23.575 "name": "BaseBdev2", 00:22:23.575 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:23.575 "is_configured": true, 00:22:23.575 "data_offset": 256, 00:22:23.575 "data_size": 7936 00:22:23.575 } 00:22:23.575 ] 00:22:23.575 }' 00:22:23.575 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.575 18:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:24.142 18:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:24.142 18:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.142 18:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:24.142 [2024-12-06 18:20:49.398427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:24.142 [2024-12-06 18:20:49.399001] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:24.142 [2024-12-06 18:20:49.399038] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:24.142 [2024-12-06 18:20:49.399102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:24.142 [2024-12-06 18:20:49.415744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:24.142 18:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.142 18:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:24.142 [2024-12-06 18:20:49.418488] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:25.075 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.075 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:25.075 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:25.075 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:25.075 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:25.075 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.076 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.076 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.076 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:25.076 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.076 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:22:25.076 "name": "raid_bdev1", 00:22:25.076 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:25.076 "strip_size_kb": 0, 00:22:25.076 "state": "online", 00:22:25.076 "raid_level": "raid1", 00:22:25.076 "superblock": true, 00:22:25.076 "num_base_bdevs": 2, 00:22:25.076 "num_base_bdevs_discovered": 2, 00:22:25.076 "num_base_bdevs_operational": 2, 00:22:25.076 "process": { 00:22:25.076 "type": "rebuild", 00:22:25.076 "target": "spare", 00:22:25.076 "progress": { 00:22:25.076 "blocks": 2304, 00:22:25.076 "percent": 29 00:22:25.076 } 00:22:25.076 }, 00:22:25.076 "base_bdevs_list": [ 00:22:25.076 { 00:22:25.076 "name": "spare", 00:22:25.076 "uuid": "cf171538-4a1a-5242-8eb9-f61f28c028af", 00:22:25.076 "is_configured": true, 00:22:25.076 "data_offset": 256, 00:22:25.076 "data_size": 7936 00:22:25.076 }, 00:22:25.076 { 00:22:25.076 "name": "BaseBdev2", 00:22:25.076 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:25.076 "is_configured": true, 00:22:25.076 "data_offset": 256, 00:22:25.076 "data_size": 7936 00:22:25.076 } 00:22:25.076 ] 00:22:25.076 }' 00:22:25.076 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:25.076 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:25.076 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:25.076 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:25.076 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:25.076 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.076 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:25.076 [2024-12-06 18:20:50.568121] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:25.341 [2024-12-06 18:20:50.630139] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:25.341 [2024-12-06 18:20:50.630426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.341 [2024-12-06 18:20:50.630459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:25.341 [2024-12-06 18:20:50.630477] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.341 18:20:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.341 "name": "raid_bdev1", 00:22:25.341 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:25.341 "strip_size_kb": 0, 00:22:25.341 "state": "online", 00:22:25.341 "raid_level": "raid1", 00:22:25.341 "superblock": true, 00:22:25.341 "num_base_bdevs": 2, 00:22:25.341 "num_base_bdevs_discovered": 1, 00:22:25.341 "num_base_bdevs_operational": 1, 00:22:25.341 "base_bdevs_list": [ 00:22:25.341 { 00:22:25.341 "name": null, 00:22:25.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.341 "is_configured": false, 00:22:25.341 "data_offset": 0, 00:22:25.341 "data_size": 7936 00:22:25.341 }, 00:22:25.341 { 00:22:25.341 "name": "BaseBdev2", 00:22:25.341 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:25.341 "is_configured": true, 00:22:25.341 "data_offset": 256, 00:22:25.341 "data_size": 7936 00:22:25.341 } 00:22:25.341 ] 00:22:25.341 }' 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.341 18:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:25.916 18:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:25.916 18:20:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.916 18:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:25.916 [2024-12-06 18:20:51.176682] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:25.916 [2024-12-06 18:20:51.177006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.916 [2024-12-06 18:20:51.177064] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:25.916 [2024-12-06 18:20:51.177087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.916 [2024-12-06 18:20:51.177393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.916 [2024-12-06 18:20:51.177424] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:25.916 [2024-12-06 18:20:51.177515] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:25.916 [2024-12-06 18:20:51.177542] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:25.916 [2024-12-06 18:20:51.177558] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:25.916 [2024-12-06 18:20:51.177593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:25.916 spare 00:22:25.916 [2024-12-06 18:20:51.194248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:25.916 18:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.916 18:20:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:25.916 [2024-12-06 18:20:51.196944] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:26.851 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.851 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:26.851 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:26.851 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:26.851 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:26.851 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.851 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.851 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.851 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:26.851 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.851 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:22:26.851 "name": "raid_bdev1", 00:22:26.851 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:26.851 "strip_size_kb": 0, 00:22:26.851 "state": "online", 00:22:26.851 "raid_level": "raid1", 00:22:26.851 "superblock": true, 00:22:26.851 "num_base_bdevs": 2, 00:22:26.851 "num_base_bdevs_discovered": 2, 00:22:26.851 "num_base_bdevs_operational": 2, 00:22:26.851 "process": { 00:22:26.851 "type": "rebuild", 00:22:26.851 "target": "spare", 00:22:26.851 "progress": { 00:22:26.851 "blocks": 2560, 00:22:26.851 "percent": 32 00:22:26.851 } 00:22:26.851 }, 00:22:26.851 "base_bdevs_list": [ 00:22:26.851 { 00:22:26.851 "name": "spare", 00:22:26.851 "uuid": "cf171538-4a1a-5242-8eb9-f61f28c028af", 00:22:26.851 "is_configured": true, 00:22:26.851 "data_offset": 256, 00:22:26.851 "data_size": 7936 00:22:26.851 }, 00:22:26.851 { 00:22:26.851 "name": "BaseBdev2", 00:22:26.851 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:26.851 "is_configured": true, 00:22:26.851 "data_offset": 256, 00:22:26.851 "data_size": 7936 00:22:26.851 } 00:22:26.851 ] 00:22:26.851 }' 00:22:26.851 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:26.851 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.851 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:27.112 [2024-12-06 
18:20:52.378440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:27.112 [2024-12-06 18:20:52.408135] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:27.112 [2024-12-06 18:20:52.408384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.112 [2024-12-06 18:20:52.408426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:27.112 [2024-12-06 18:20:52.408443] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.112 18:20:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.112 "name": "raid_bdev1", 00:22:27.112 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:27.112 "strip_size_kb": 0, 00:22:27.112 "state": "online", 00:22:27.112 "raid_level": "raid1", 00:22:27.112 "superblock": true, 00:22:27.112 "num_base_bdevs": 2, 00:22:27.112 "num_base_bdevs_discovered": 1, 00:22:27.112 "num_base_bdevs_operational": 1, 00:22:27.112 "base_bdevs_list": [ 00:22:27.112 { 00:22:27.112 "name": null, 00:22:27.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.112 "is_configured": false, 00:22:27.112 "data_offset": 0, 00:22:27.112 "data_size": 7936 00:22:27.112 }, 00:22:27.112 { 00:22:27.112 "name": "BaseBdev2", 00:22:27.112 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:27.112 "is_configured": true, 00:22:27.112 "data_offset": 256, 00:22:27.112 "data_size": 7936 00:22:27.112 } 00:22:27.112 ] 00:22:27.112 }' 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.112 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:27.680 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:27.680 18:20:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:27.680 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:27.680 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:27.680 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:27.680 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.680 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.680 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.680 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:27.680 18:20:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.680 18:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:27.680 "name": "raid_bdev1", 00:22:27.680 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:27.680 "strip_size_kb": 0, 00:22:27.680 "state": "online", 00:22:27.680 "raid_level": "raid1", 00:22:27.680 "superblock": true, 00:22:27.680 "num_base_bdevs": 2, 00:22:27.680 "num_base_bdevs_discovered": 1, 00:22:27.680 "num_base_bdevs_operational": 1, 00:22:27.680 "base_bdevs_list": [ 00:22:27.680 { 00:22:27.680 "name": null, 00:22:27.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.680 "is_configured": false, 00:22:27.680 "data_offset": 0, 00:22:27.680 "data_size": 7936 00:22:27.680 }, 00:22:27.680 { 00:22:27.680 "name": "BaseBdev2", 00:22:27.680 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:27.680 "is_configured": true, 00:22:27.680 "data_offset": 256, 
00:22:27.680 "data_size": 7936 00:22:27.680 } 00:22:27.680 ] 00:22:27.680 }' 00:22:27.680 18:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:27.680 18:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:27.680 18:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:27.680 18:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:27.680 18:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:27.680 18:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.680 18:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:27.680 18:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.680 18:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:27.680 18:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.680 18:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:27.680 [2024-12-06 18:20:53.149519] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:27.680 [2024-12-06 18:20:53.149599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.680 [2024-12-06 18:20:53.149638] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:27.680 [2024-12-06 18:20:53.149654] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.680 [2024-12-06 18:20:53.149949] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.680 [2024-12-06 18:20:53.149975] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:27.680 [2024-12-06 18:20:53.150052] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:27.680 [2024-12-06 18:20:53.150074] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:27.680 [2024-12-06 18:20:53.150089] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:27.680 [2024-12-06 18:20:53.150104] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:27.680 BaseBdev1 00:22:27.680 18:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.680 18:20:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.057 18:20:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.057 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.057 "name": "raid_bdev1", 00:22:29.057 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:29.057 "strip_size_kb": 0, 00:22:29.057 "state": "online", 00:22:29.057 "raid_level": "raid1", 00:22:29.058 "superblock": true, 00:22:29.058 "num_base_bdevs": 2, 00:22:29.058 "num_base_bdevs_discovered": 1, 00:22:29.058 "num_base_bdevs_operational": 1, 00:22:29.058 "base_bdevs_list": [ 00:22:29.058 { 00:22:29.058 "name": null, 00:22:29.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.058 "is_configured": false, 00:22:29.058 "data_offset": 0, 00:22:29.058 "data_size": 7936 00:22:29.058 }, 00:22:29.058 { 00:22:29.058 "name": "BaseBdev2", 00:22:29.058 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:29.058 "is_configured": true, 00:22:29.058 "data_offset": 256, 00:22:29.058 "data_size": 7936 00:22:29.058 } 00:22:29.058 ] 00:22:29.058 }' 00:22:29.058 18:20:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.058 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:29.317 "name": "raid_bdev1", 00:22:29.317 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:29.317 "strip_size_kb": 0, 00:22:29.317 "state": "online", 00:22:29.317 "raid_level": "raid1", 00:22:29.317 "superblock": true, 00:22:29.317 "num_base_bdevs": 2, 00:22:29.317 "num_base_bdevs_discovered": 1, 00:22:29.317 "num_base_bdevs_operational": 1, 00:22:29.317 "base_bdevs_list": [ 00:22:29.317 { 00:22:29.317 "name": 
null, 00:22:29.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.317 "is_configured": false, 00:22:29.317 "data_offset": 0, 00:22:29.317 "data_size": 7936 00:22:29.317 }, 00:22:29.317 { 00:22:29.317 "name": "BaseBdev2", 00:22:29.317 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:29.317 "is_configured": true, 00:22:29.317 "data_offset": 256, 00:22:29.317 "data_size": 7936 00:22:29.317 } 00:22:29.317 ] 00:22:29.317 }' 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:29.317 [2024-12-06 18:20:54.814063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:29.317 [2024-12-06 18:20:54.814457] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:29.317 [2024-12-06 18:20:54.814497] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:29.317 request: 00:22:29.317 { 00:22:29.317 "base_bdev": "BaseBdev1", 00:22:29.317 "raid_bdev": "raid_bdev1", 00:22:29.317 "method": "bdev_raid_add_base_bdev", 00:22:29.317 "req_id": 1 00:22:29.317 } 00:22:29.317 Got JSON-RPC error response 00:22:29.317 response: 00:22:29.317 { 00:22:29.317 "code": -22, 00:22:29.317 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:29.317 } 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.317 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.318 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.318 18:20:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:30.693 "name": "raid_bdev1", 00:22:30.693 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:30.693 "strip_size_kb": 0, 
00:22:30.693 "state": "online", 00:22:30.693 "raid_level": "raid1", 00:22:30.693 "superblock": true, 00:22:30.693 "num_base_bdevs": 2, 00:22:30.693 "num_base_bdevs_discovered": 1, 00:22:30.693 "num_base_bdevs_operational": 1, 00:22:30.693 "base_bdevs_list": [ 00:22:30.693 { 00:22:30.693 "name": null, 00:22:30.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.693 "is_configured": false, 00:22:30.693 "data_offset": 0, 00:22:30.693 "data_size": 7936 00:22:30.693 }, 00:22:30.693 { 00:22:30.693 "name": "BaseBdev2", 00:22:30.693 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:30.693 "is_configured": true, 00:22:30.693 "data_offset": 256, 00:22:30.693 "data_size": 7936 00:22:30.693 } 00:22:30.693 ] 00:22:30.693 }' 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:30.693 18:20:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.952 
18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:30.952 "name": "raid_bdev1", 00:22:30.952 "uuid": "45ff06ec-d1fc-4e53-b4f8-cf5ea45ea120", 00:22:30.952 "strip_size_kb": 0, 00:22:30.952 "state": "online", 00:22:30.952 "raid_level": "raid1", 00:22:30.952 "superblock": true, 00:22:30.952 "num_base_bdevs": 2, 00:22:30.952 "num_base_bdevs_discovered": 1, 00:22:30.952 "num_base_bdevs_operational": 1, 00:22:30.952 "base_bdevs_list": [ 00:22:30.952 { 00:22:30.952 "name": null, 00:22:30.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.952 "is_configured": false, 00:22:30.952 "data_offset": 0, 00:22:30.952 "data_size": 7936 00:22:30.952 }, 00:22:30.952 { 00:22:30.952 "name": "BaseBdev2", 00:22:30.952 "uuid": "17a14fb6-25d5-5388-a606-4f844b589316", 00:22:30.952 "is_configured": true, 00:22:30.952 "data_offset": 256, 00:22:30.952 "data_size": 7936 00:22:30.952 } 00:22:30.952 ] 00:22:30.952 }' 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89558 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89558 ']' 00:22:30.952 18:20:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89558 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:30.952 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.211 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89558 00:22:31.211 killing process with pid 89558 00:22:31.211 Received shutdown signal, test time was about 60.000000 seconds 00:22:31.211 00:22:31.211 Latency(us) 00:22:31.211 [2024-12-06T18:20:56.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.211 [2024-12-06T18:20:56.731Z] =================================================================================================================== 00:22:31.211 [2024-12-06T18:20:56.731Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:31.211 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:31.211 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:31.211 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89558' 00:22:31.211 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89558 00:22:31.211 [2024-12-06 18:20:56.494576] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:31.211 18:20:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89558 00:22:31.211 [2024-12-06 18:20:56.494763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:31.211 [2024-12-06 18:20:56.494859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:22:31.211 [2024-12-06 18:20:56.494893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:31.470 [2024-12-06 18:20:56.783781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:32.423 ************************************ 00:22:32.423 END TEST raid_rebuild_test_sb_md_interleaved 00:22:32.423 ************************************ 00:22:32.423 18:20:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:22:32.423 00:22:32.423 real 0m18.561s 00:22:32.423 user 0m25.097s 00:22:32.423 sys 0m1.523s 00:22:32.423 18:20:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.423 18:20:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:32.702 18:20:57 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:22:32.702 18:20:57 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:22:32.702 18:20:57 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89558 ']' 00:22:32.702 18:20:57 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89558 00:22:32.702 18:20:57 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:22:32.702 00:22:32.702 real 13m3.965s 00:22:32.702 user 18m28.751s 00:22:32.702 sys 1m45.037s 00:22:32.702 18:20:57 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.702 18:20:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:32.702 ************************************ 00:22:32.702 END TEST bdev_raid 00:22:32.702 ************************************ 00:22:32.702 18:20:58 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:32.702 18:20:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:32.702 18:20:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.702 18:20:58 -- common/autotest_common.sh@10 -- # set +x 00:22:32.702 
************************************ 00:22:32.702 START TEST spdkcli_raid 00:22:32.702 ************************************ 00:22:32.702 18:20:58 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:32.702 * Looking for test storage... 00:22:32.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:32.702 18:20:58 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:32.702 18:20:58 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:22:32.702 18:20:58 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:32.702 18:20:58 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:32.702 18:20:58 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:22:32.702 18:20:58 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:32.702 18:20:58 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:32.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.702 --rc genhtml_branch_coverage=1 00:22:32.702 --rc genhtml_function_coverage=1 00:22:32.702 --rc genhtml_legend=1 00:22:32.702 --rc geninfo_all_blocks=1 00:22:32.702 --rc geninfo_unexecuted_blocks=1 00:22:32.702 00:22:32.702 ' 00:22:32.702 18:20:58 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:32.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.702 --rc genhtml_branch_coverage=1 00:22:32.702 --rc genhtml_function_coverage=1 00:22:32.702 --rc genhtml_legend=1 00:22:32.702 --rc geninfo_all_blocks=1 00:22:32.702 --rc geninfo_unexecuted_blocks=1 00:22:32.702 00:22:32.702 ' 00:22:32.702 
18:20:58 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:32.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.702 --rc genhtml_branch_coverage=1 00:22:32.702 --rc genhtml_function_coverage=1 00:22:32.702 --rc genhtml_legend=1 00:22:32.702 --rc geninfo_all_blocks=1 00:22:32.702 --rc geninfo_unexecuted_blocks=1 00:22:32.702 00:22:32.702 ' 00:22:32.702 18:20:58 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:32.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.702 --rc genhtml_branch_coverage=1 00:22:32.702 --rc genhtml_function_coverage=1 00:22:32.702 --rc genhtml_legend=1 00:22:32.702 --rc geninfo_all_blocks=1 00:22:32.702 --rc geninfo_unexecuted_blocks=1 00:22:32.702 00:22:32.702 ' 00:22:32.702 18:20:58 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:32.702 18:20:58 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:32.702 18:20:58 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:32.702 18:20:58 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:22:32.702 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:22:32.961 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:22:32.961 18:20:58 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:22:32.961 18:20:58 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:22:32.961 18:20:58 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:22:32.961 18:20:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:32.961 18:20:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:32.961 18:20:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:32.961 18:20:58 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:32.961 18:20:58 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:32.962 18:20:58 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:32.962 18:20:58 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:22:32.962 18:20:58 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:22:32.962 18:20:58 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.962 18:20:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:32.962 18:20:58 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:22:32.962 18:20:58 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90239 00:22:32.962 18:20:58 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:22:32.962 18:20:58 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90239 00:22:32.962 18:20:58 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90239 ']' 00:22:32.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.962 18:20:58 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.962 18:20:58 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.962 18:20:58 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.962 18:20:58 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.962 18:20:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:32.962 [2024-12-06 18:20:58.385673] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:22:32.962 [2024-12-06 18:20:58.385985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90239 ] 00:22:33.220 [2024-12-06 18:20:58.594149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:33.479 [2024-12-06 18:20:58.757846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.479 [2024-12-06 18:20:58.757868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.415 18:20:59 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.415 18:20:59 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:22:34.415 18:20:59 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:22:34.415 18:20:59 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.415 18:20:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:34.415 18:20:59 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:22:34.415 18:20:59 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.415 18:20:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:34.415 18:20:59 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:34.415 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:34.415 ' 00:22:35.790 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:22:35.790 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:22:36.055 18:21:01 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:22:36.055 18:21:01 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:36.055 18:21:01 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:22:36.055 18:21:01 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:22:36.055 18:21:01 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:36.055 18:21:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:36.055 18:21:01 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:22:36.055 ' 00:22:36.994 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:22:37.252 18:21:02 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:22:37.252 18:21:02 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:37.252 18:21:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:37.252 18:21:02 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:22:37.252 18:21:02 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:37.252 18:21:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:37.252 18:21:02 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:22:37.252 18:21:02 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:22:37.819 18:21:03 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:22:37.819 18:21:03 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:22:37.819 18:21:03 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:22:37.819 18:21:03 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:37.819 18:21:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:37.819 18:21:03 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:22:37.819 18:21:03 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:37.819 18:21:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:37.819 18:21:03 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:22:37.819 ' 00:22:38.754 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:22:39.013 18:21:04 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:22:39.013 18:21:04 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.013 18:21:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:39.013 18:21:04 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:22:39.013 18:21:04 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.013 18:21:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:39.013 18:21:04 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:22:39.013 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:22:39.013 ' 00:22:40.400 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:22:40.400 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:22:40.658 18:21:05 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:22:40.658 18:21:05 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.658 18:21:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:40.658 18:21:05 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90239 00:22:40.658 18:21:05 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90239 ']' 00:22:40.658 18:21:05 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90239 00:22:40.658 18:21:05 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:22:40.658 18:21:05 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.658 18:21:05 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90239 00:22:40.658 killing process with pid 90239 00:22:40.658 18:21:05 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:40.658 18:21:05 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:40.658 18:21:05 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90239' 00:22:40.658 18:21:05 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90239 00:22:40.658 18:21:05 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90239 00:22:43.238 18:21:08 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:22:43.238 18:21:08 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90239 ']' 00:22:43.238 18:21:08 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90239 00:22:43.238 18:21:08 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90239 ']' 00:22:43.238 Process with pid 90239 is not found 00:22:43.238 18:21:08 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90239 00:22:43.238 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90239) - No such process 00:22:43.238 18:21:08 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90239 is not found' 00:22:43.238 18:21:08 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:22:43.238 18:21:08 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:43.238 18:21:08 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:43.238 18:21:08 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:43.238 ************************************ 00:22:43.238 END TEST spdkcli_raid 
00:22:43.238 ************************************ 00:22:43.238 00:22:43.238 real 0m10.387s 00:22:43.238 user 0m21.354s 00:22:43.238 sys 0m1.121s 00:22:43.238 18:21:08 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.238 18:21:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:43.238 18:21:08 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:22:43.238 18:21:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:43.238 18:21:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.238 18:21:08 -- common/autotest_common.sh@10 -- # set +x 00:22:43.238 ************************************ 00:22:43.238 START TEST blockdev_raid5f 00:22:43.238 ************************************ 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:22:43.238 * Looking for test storage... 00:22:43.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.238 18:21:08 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:43.238 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.238 --rc genhtml_branch_coverage=1 00:22:43.238 --rc genhtml_function_coverage=1 00:22:43.238 --rc genhtml_legend=1 00:22:43.238 --rc geninfo_all_blocks=1 00:22:43.238 --rc geninfo_unexecuted_blocks=1 00:22:43.238 00:22:43.238 ' 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:43.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.238 --rc genhtml_branch_coverage=1 00:22:43.238 --rc genhtml_function_coverage=1 00:22:43.238 --rc genhtml_legend=1 00:22:43.238 --rc geninfo_all_blocks=1 00:22:43.238 --rc geninfo_unexecuted_blocks=1 00:22:43.238 00:22:43.238 ' 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:43.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.238 --rc genhtml_branch_coverage=1 00:22:43.238 --rc genhtml_function_coverage=1 00:22:43.238 --rc genhtml_legend=1 00:22:43.238 --rc geninfo_all_blocks=1 00:22:43.238 --rc geninfo_unexecuted_blocks=1 00:22:43.238 00:22:43.238 ' 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:43.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.238 --rc genhtml_branch_coverage=1 00:22:43.238 --rc genhtml_function_coverage=1 00:22:43.238 --rc genhtml_legend=1 00:22:43.238 --rc geninfo_all_blocks=1 00:22:43.238 --rc geninfo_unexecuted_blocks=1 00:22:43.238 00:22:43.238 ' 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90515 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:43.238 18:21:08 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90515 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90515 ']' 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.238 18:21:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:43.496 [2024-12-06 18:21:08.776442] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:22:43.496 [2024-12-06 18:21:08.776869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90515 ] 00:22:43.496 [2024-12-06 18:21:08.962559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.753 [2024-12-06 18:21:09.121194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.688 18:21:10 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.688 18:21:10 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:22:44.688 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:22:44.688 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:22:44.688 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:22:44.688 18:21:10 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.688 18:21:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.688 Malloc0 00:22:44.688 Malloc1 00:22:44.688 Malloc2 00:22:44.688 18:21:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.688 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:22:44.688 18:21:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.688 18:21:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.947 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:22:44.947 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.947 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.947 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.947 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:22:44.947 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == 
false)' 00:22:44.947 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.947 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:22:44.947 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "49a6d1d7-9ac5-4ead-8d5c-8bff0714319f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "49a6d1d7-9ac5-4ead-8d5c-8bff0714319f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "49a6d1d7-9ac5-4ead-8d5c-8bff0714319f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "845f3a24-dadb-42f6-b910-683957f6c65b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "d28b3bd8-ee09-4061-9745-2facfeaa2819",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "515f2e4e-5384-4524-9fe0-3b4a1056a105",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:44.947 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:22:44.947 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:22:44.947 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:22:44.947 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:22:44.947 18:21:10 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90515 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90515 ']' 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90515 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90515 00:22:44.947 killing process with pid 90515 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90515' 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90515 00:22:44.947 18:21:10 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90515 00:22:48.236 18:21:13 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:48.236 18:21:13 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:48.236 18:21:13 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:48.236 18:21:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.236 18:21:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:48.236 ************************************ 00:22:48.236 START TEST bdev_hello_world 00:22:48.236 ************************************ 00:22:48.236 18:21:13 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:48.236 [2024-12-06 18:21:13.166927] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:22:48.236 [2024-12-06 18:21:13.167385] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90589 ] 00:22:48.236 [2024-12-06 18:21:13.340669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.236 [2024-12-06 18:21:13.475786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.804 [2024-12-06 18:21:14.062148] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:48.804 [2024-12-06 18:21:14.062461] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:22:48.804 [2024-12-06 18:21:14.062505] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:48.804 [2024-12-06 18:21:14.063217] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:48.804 [2024-12-06 18:21:14.063450] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:48.804 [2024-12-06 18:21:14.063478] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:48.804 [2024-12-06 18:21:14.063559] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:22:48.804 00:22:48.804 [2024-12-06 18:21:14.063587] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:50.180 00:22:50.180 real 0m2.344s 00:22:50.180 user 0m1.866s 00:22:50.180 sys 0m0.353s 00:22:50.180 18:21:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.180 ************************************ 00:22:50.180 END TEST bdev_hello_world 00:22:50.180 ************************************ 00:22:50.180 18:21:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:50.180 18:21:15 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:22:50.180 18:21:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:50.180 18:21:15 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.180 18:21:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:50.180 ************************************ 00:22:50.180 START TEST bdev_bounds 00:22:50.180 ************************************ 00:22:50.180 Process bdevio pid: 90631 00:22:50.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:50.180 18:21:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:22:50.180 18:21:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90631 00:22:50.180 18:21:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:50.180 18:21:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90631' 00:22:50.180 18:21:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90631 00:22:50.180 18:21:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90631 ']' 00:22:50.180 18:21:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:50.180 18:21:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.180 18:21:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.180 18:21:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.180 18:21:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.180 18:21:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:50.180 [2024-12-06 18:21:15.582439] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 
00:22:50.180 [2024-12-06 18:21:15.582648] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90631 ] 00:22:50.439 [2024-12-06 18:21:15.770314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:50.439 [2024-12-06 18:21:15.917291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.439 [2024-12-06 18:21:15.917403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.439 [2024-12-06 18:21:15.917417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.375 18:21:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.375 18:21:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:22:51.375 18:21:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:51.375 I/O targets: 00:22:51.375 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:22:51.375 00:22:51.375 00:22:51.375 CUnit - A unit testing framework for C - Version 2.1-3 00:22:51.375 http://cunit.sourceforge.net/ 00:22:51.375 00:22:51.375 00:22:51.375 Suite: bdevio tests on: raid5f 00:22:51.375 Test: blockdev write read block ...passed 00:22:51.375 Test: blockdev write zeroes read block ...passed 00:22:51.375 Test: blockdev write zeroes read no split ...passed 00:22:51.375 Test: blockdev write zeroes read split ...passed 00:22:51.634 Test: blockdev write zeroes read split partial ...passed 00:22:51.634 Test: blockdev reset ...passed 00:22:51.634 Test: blockdev write read 8 blocks ...passed 00:22:51.634 Test: blockdev write read size > 128k ...passed 00:22:51.634 Test: blockdev write read invalid size ...passed 00:22:51.634 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:22:51.634 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:51.634 Test: blockdev write read max offset ...passed 00:22:51.634 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:51.634 Test: blockdev writev readv 8 blocks ...passed 00:22:51.634 Test: blockdev writev readv 30 x 1block ...passed 00:22:51.634 Test: blockdev writev readv block ...passed 00:22:51.634 Test: blockdev writev readv size > 128k ...passed 00:22:51.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:51.634 Test: blockdev comparev and writev ...passed 00:22:51.634 Test: blockdev nvme passthru rw ...passed 00:22:51.634 Test: blockdev nvme passthru vendor specific ...passed 00:22:51.634 Test: blockdev nvme admin passthru ...passed 00:22:51.634 Test: blockdev copy ...passed 00:22:51.634 00:22:51.634 Run Summary: Type Total Ran Passed Failed Inactive 00:22:51.634 suites 1 1 n/a 0 0 00:22:51.634 tests 23 23 23 0 0 00:22:51.634 asserts 130 130 130 0 n/a 00:22:51.634 00:22:51.634 Elapsed time = 0.573 seconds 00:22:51.634 0 00:22:51.634 18:21:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90631 00:22:51.634 18:21:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90631 ']' 00:22:51.634 18:21:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90631 00:22:51.634 18:21:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:22:51.634 18:21:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.634 18:21:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90631 00:22:51.634 18:21:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:51.634 18:21:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:51.634 18:21:16 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 90631' 00:22:51.634 killing process with pid 90631 00:22:51.634 18:21:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90631 00:22:51.634 18:21:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90631 00:22:53.009 18:21:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:22:53.009 ************************************ 00:22:53.009 END TEST bdev_bounds 00:22:53.009 ************************************ 00:22:53.009 00:22:53.009 real 0m2.987s 00:22:53.009 user 0m7.320s 00:22:53.009 sys 0m0.493s 00:22:53.009 18:21:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:53.009 18:21:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:53.009 18:21:18 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:53.009 18:21:18 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:53.009 18:21:18 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.009 18:21:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:53.009 ************************************ 00:22:53.009 START TEST bdev_nbd 00:22:53.009 ************************************ 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local 
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90692 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:22:53.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90692 /var/tmp/spdk-nbd.sock 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90692 ']' 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.009 18:21:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:53.267 [2024-12-06 18:21:18.650666] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:22:53.267 [2024-12-06 18:21:18.650990] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.525 [2024-12-06 18:21:18.843096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.525 [2024-12-06 18:21:18.987000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.091 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.091 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:22:54.091 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:22:54.091 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:54.091 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 
00:22:54.092 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:22:54.092 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:22:54.092 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:54.092 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:22:54.092 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:22:54.092 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:22:54.092 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:22:54.092 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:22:54.092 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:54.092 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:22:54.658 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:22:54.658 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:22:54.658 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:22:54.658 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:54.658 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:54.658 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:54.658 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:54.658 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:54.658 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:54.658 18:21:19 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:54.659 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:54.659 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:54.659 1+0 records in 00:22:54.659 1+0 records out 00:22:54.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412072 s, 9.9 MB/s 00:22:54.659 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.659 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:54.659 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.659 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:54.659 18:21:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:54.659 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:54.659 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:54.659 18:21:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:54.918 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:22:54.918 { 00:22:54.918 "nbd_device": "/dev/nbd0", 00:22:54.918 "bdev_name": "raid5f" 00:22:54.918 } 00:22:54.918 ]' 00:22:54.918 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:22:54.918 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:22:54.918 { 00:22:54.918 "nbd_device": "/dev/nbd0", 00:22:54.918 "bdev_name": "raid5f" 00:22:54.918 } 00:22:54.918 ]' 00:22:54.918 18:21:20 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:22:54.918 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:54.918 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:54.918 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:54.918 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:54.918 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:54.918 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.918 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:55.177 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:55.177 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:55.177 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:55.177 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:55.177 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:55.177 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:55.177 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:55.177 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:55.177 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:55.177 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:55.177 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:22:55.436 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:55.436 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:55.436 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:55.696 18:21:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:22:55.987 /dev/nbd0 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:22:55.987 1+0 records in 00:22:55.987 1+0 records out 00:22:55.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441064 s, 9.3 MB/s 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:55.987 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:55.988 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:56.277 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:56.277 { 00:22:56.277 "nbd_device": "/dev/nbd0", 00:22:56.277 "bdev_name": "raid5f" 00:22:56.277 } 00:22:56.277 ]' 00:22:56.277 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:56.277 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:56.277 { 00:22:56.277 "nbd_device": "/dev/nbd0", 00:22:56.277 "bdev_name": "raid5f" 00:22:56.277 } 00:22:56.277 ]' 00:22:56.277 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:22:56.277 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # 
echo /dev/nbd0 00:22:56.277 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:56.277 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:22:56.277 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:22:56.277 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:22:56.277 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:22:56.277 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:22:56.277 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:56.277 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:56.277 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:56.278 256+0 records in 00:22:56.278 256+0 records out 00:22:56.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108456 s, 96.7 MB/s 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:56.278 256+0 records in 00:22:56.278 256+0 records out 00:22:56.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0404211 s, 25.9 MB/s 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:56.278 18:21:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:56.846 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:56.846 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:56.846 18:21:22 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:56.846 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:56.846 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:56.846 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:56.846 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:56.846 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:56.846 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:56.846 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:56.846 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:56.846 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:56.846 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:56.846 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:57.106 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:57.106 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:57.106 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:57.106 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:57.106 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:57.106 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:57.106 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:22:57.106 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:57.106 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # 
return 0 00:22:57.106 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:57.106 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:57.106 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:22:57.106 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:57.365 malloc_lvol_verify 00:22:57.365 18:21:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:57.625 8bccc8d9-658c-4538-8ad3-d940c4ba77e7 00:22:57.625 18:21:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:57.884 ccc53f06-dfd9-4bac-a44d-18629711d6bd 00:22:57.884 18:21:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:58.450 /dev/nbd0 00:22:58.450 18:21:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:22:58.450 18:21:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:22:58.450 18:21:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:22:58.450 18:21:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:22:58.450 18:21:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:22:58.450 mke2fs 1.47.0 (5-Feb-2023) 00:22:58.450 Discarding device blocks: 0/4096 done 00:22:58.450 Creating filesystem with 4096 1k blocks and 1024 inodes 00:22:58.450 00:22:58.450 Allocating group tables: 0/1 done 
00:22:58.450 Writing inode tables: 0/1 done 00:22:58.450 Creating journal (1024 blocks): done 00:22:58.450 Writing superblocks and filesystem accounting information: 0/1 done 00:22:58.450 00:22:58.450 18:21:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:58.450 18:21:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:58.450 18:21:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:58.450 18:21:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:58.450 18:21:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:58.450 18:21:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:58.450 18:21:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90692 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90692 ']' 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@958 -- # kill -0 90692 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90692 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:58.709 killing process with pid 90692 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90692' 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90692 00:22:58.709 18:21:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 90692 00:23:00.086 ************************************ 00:23:00.086 END TEST bdev_nbd 00:23:00.086 ************************************ 00:23:00.086 18:21:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:23:00.086 00:23:00.086 real 0m7.013s 00:23:00.086 user 0m10.040s 00:23:00.086 sys 0m1.587s 00:23:00.086 18:21:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.086 18:21:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:00.086 18:21:25 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:23:00.086 18:21:25 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:23:00.086 18:21:25 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:23:00.086 18:21:25 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:23:00.086 18:21:25 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:00.086 18:21:25 blockdev_raid5f -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:23:00.086 18:21:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:00.086 ************************************ 00:23:00.086 START TEST bdev_fio 00:23:00.086 ************************************ 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:23:00.086 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 
-- # '[' -z verify ']' 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:23:00.086 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:00.344 ************************************ 00:23:00.344 START TEST bdev_fio_rw_verify 00:23:00.344 ************************************ 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:00.344 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:23:00.345 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:00.345 18:21:25 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:00.345 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:00.345 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:23:00.345 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:00.345 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:00.345 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:00.345 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:23:00.345 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:00.345 18:21:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:00.602 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:00.602 fio-3.35 00:23:00.602 Starting 1 thread 00:23:12.818 00:23:12.818 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90908: Fri Dec 6 18:21:36 2024 00:23:12.818 read: IOPS=8447, BW=33.0MiB/s (34.6MB/s)(330MiB/10001msec) 00:23:12.818 slat (usec): min=22, max=121, avg=29.28, stdev= 6.88 00:23:12.818 clat (usec): min=13, max=701, avg=188.20, stdev=71.16 00:23:12.818 lat (usec): min=37, 
max=727, avg=217.48, stdev=71.97 00:23:12.818 clat percentiles (usec): 00:23:12.818 | 50.000th=[ 188], 99.000th=[ 330], 99.900th=[ 379], 99.990th=[ 437], 00:23:12.818 | 99.999th=[ 701] 00:23:12.818 write: IOPS=8879, BW=34.7MiB/s (36.4MB/s)(342MiB/9869msec); 0 zone resets 00:23:12.818 slat (usec): min=10, max=259, avg=23.33, stdev= 6.96 00:23:12.818 clat (usec): min=108, max=1346, avg=434.66, stdev=59.47 00:23:12.818 lat (usec): min=128, max=1605, avg=457.99, stdev=60.85 00:23:12.818 clat percentiles (usec): 00:23:12.818 | 50.000th=[ 437], 99.000th=[ 570], 99.900th=[ 660], 99.990th=[ 1074], 00:23:12.818 | 99.999th=[ 1352] 00:23:12.818 bw ( KiB/s): min=33088, max=37568, per=98.90%, avg=35127.58, stdev=1258.73, samples=19 00:23:12.818 iops : min= 8272, max= 9392, avg=8781.89, stdev=314.68, samples=19 00:23:12.818 lat (usec) : 20=0.01%, 50=0.01%, 100=6.30%, 250=31.27%, 500=56.21% 00:23:12.818 lat (usec) : 750=6.20%, 1000=0.01% 00:23:12.818 lat (msec) : 2=0.01% 00:23:12.818 cpu : usr=98.62%, sys=0.64%, ctx=20, majf=0, minf=7352 00:23:12.818 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:12.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:12.818 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:12.818 issued rwts: total=84487,87630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:12.818 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:12.818 00:23:12.818 Run status group 0 (all jobs): 00:23:12.818 READ: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=330MiB (346MB), run=10001-10001msec 00:23:12.818 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=342MiB (359MB), run=9869-9869msec 00:23:13.388 ----------------------------------------------------- 00:23:13.388 Suppressions used: 00:23:13.388 count bytes template 00:23:13.388 1 7 /usr/src/fio/parse.c 00:23:13.388 526 50496 /usr/src/fio/iolog.c 00:23:13.388 1 8 
libtcmalloc_minimal.so 00:23:13.388 1 904 libcrypto.so 00:23:13.388 ----------------------------------------------------- 00:23:13.388 00:23:13.388 00:23:13.388 real 0m13.049s 00:23:13.388 user 0m13.344s 00:23:13.388 sys 0m0.868s 00:23:13.388 18:21:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:13.388 18:21:38 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:23:13.388 ************************************ 00:23:13.388 END TEST bdev_fio_rw_verify 00:23:13.388 ************************************ 00:23:13.388 18:21:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:23:13.388 18:21:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:13.388 18:21:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:23:13.388 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:13.388 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:23:13.388 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:23:13.388 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:13.388 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "49a6d1d7-9ac5-4ead-8d5c-8bff0714319f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "49a6d1d7-9ac5-4ead-8d5c-8bff0714319f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "49a6d1d7-9ac5-4ead-8d5c-8bff0714319f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "845f3a24-dadb-42f6-b910-683957f6c65b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "d28b3bd8-ee09-4061-9745-2facfeaa2819",' ' "is_configured": true,' ' "data_offset": 
0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "515f2e4e-5384-4524-9fe0-3b4a1056a105",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:13.389 /home/vagrant/spdk_repo/spdk 00:23:13.389 ************************************ 00:23:13.389 END TEST bdev_fio 00:23:13.389 ************************************ 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:23:13.389 00:23:13.389 real 0m13.295s 00:23:13.389 user 0m13.446s 00:23:13.389 sys 0m0.975s 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:13.389 18:21:38 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:13.648 18:21:38 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:13.648 18:21:38 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:13.648 18:21:38 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:13.648 18:21:38 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:13.648 18:21:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:13.648 ************************************ 00:23:13.648 START TEST bdev_verify 00:23:13.648 
************************************ 00:23:13.648 18:21:38 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:13.648 [2024-12-06 18:21:39.047167] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:23:13.648 [2024-12-06 18:21:39.047354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91071 ] 00:23:13.949 [2024-12-06 18:21:39.239846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:13.949 [2024-12-06 18:21:39.398685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.949 [2024-12-06 18:21:39.398710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.517 Running I/O for 5 seconds... 
00:23:16.828 7446.00 IOPS, 29.09 MiB/s [2024-12-06T18:21:43.281Z] 7682.00 IOPS, 30.01 MiB/s [2024-12-06T18:21:44.216Z] 8280.00 IOPS, 32.34 MiB/s [2024-12-06T18:21:45.152Z] 8669.75 IOPS, 33.87 MiB/s [2024-12-06T18:21:45.152Z] 8655.00 IOPS, 33.81 MiB/s 00:23:19.632 Latency(us) 00:23:19.632 [2024-12-06T18:21:45.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.632 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:19.632 Verification LBA range: start 0x0 length 0x2000 00:23:19.632 raid5f : 5.02 4345.37 16.97 0.00 0.00 44679.15 834.09 34793.66 00:23:19.632 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:19.633 Verification LBA range: start 0x2000 length 0x2000 00:23:19.633 raid5f : 5.02 4320.64 16.88 0.00 0.00 44901.48 284.86 34793.66 00:23:19.633 [2024-12-06T18:21:45.153Z] =================================================================================================================== 00:23:19.633 [2024-12-06T18:21:45.153Z] Total : 8666.01 33.85 0.00 0.00 44790.03 284.86 34793.66 00:23:21.533 00:23:21.533 real 0m7.627s 00:23:21.533 user 0m13.856s 00:23:21.533 sys 0m0.426s 00:23:21.533 18:21:46 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.533 ************************************ 00:23:21.533 END TEST bdev_verify 00:23:21.533 18:21:46 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:23:21.533 ************************************ 00:23:21.533 18:21:46 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:21.533 18:21:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:21.533 18:21:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.533 18:21:46 blockdev_raid5f -- 
common/autotest_common.sh@10 -- # set +x 00:23:21.533 ************************************ 00:23:21.533 START TEST bdev_verify_big_io 00:23:21.533 ************************************ 00:23:21.533 18:21:46 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:21.533 [2024-12-06 18:21:46.737841] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:23:21.533 [2024-12-06 18:21:46.738098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91167 ] 00:23:21.533 [2024-12-06 18:21:46.935348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:21.791 [2024-12-06 18:21:47.119656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.791 [2024-12-06 18:21:47.119662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.359 Running I/O for 5 seconds... 
00:23:24.265 506.00 IOPS, 31.62 MiB/s [2024-12-06T18:21:51.163Z] 634.00 IOPS, 39.62 MiB/s [2024-12-06T18:21:52.100Z] 675.33 IOPS, 42.21 MiB/s [2024-12-06T18:21:53.033Z] 697.50 IOPS, 43.59 MiB/s [2024-12-06T18:21:53.033Z] 697.40 IOPS, 43.59 MiB/s 00:23:27.513 Latency(us) 00:23:27.513 [2024-12-06T18:21:53.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.513 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:27.513 Verification LBA range: start 0x0 length 0x200 00:23:27.513 raid5f : 5.24 351.19 21.95 0.00 0.00 9009144.35 194.56 398458.88 00:23:27.513 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:27.513 Verification LBA range: start 0x200 length 0x200 00:23:27.513 raid5f : 5.24 351.18 21.95 0.00 0.00 9001687.18 171.29 398458.88 00:23:27.513 [2024-12-06T18:21:53.033Z] =================================================================================================================== 00:23:27.513 [2024-12-06T18:21:53.033Z] Total : 702.36 43.90 0.00 0.00 9005415.77 171.29 398458.88 00:23:28.908 00:23:28.908 real 0m7.810s 00:23:28.908 user 0m14.231s 00:23:28.908 sys 0m0.395s 00:23:28.908 18:21:54 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:28.908 ************************************ 00:23:28.908 END TEST bdev_verify_big_io 00:23:28.908 ************************************ 00:23:28.908 18:21:54 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:23:29.165 18:21:54 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:29.165 18:21:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:29.165 18:21:54 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.165 18:21:54 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:29.165 ************************************ 00:23:29.165 START TEST bdev_write_zeroes 00:23:29.165 ************************************ 00:23:29.165 18:21:54 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:29.165 [2024-12-06 18:21:54.571914] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:23:29.165 [2024-12-06 18:21:54.572079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91266 ] 00:23:29.423 [2024-12-06 18:21:54.750449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.423 [2024-12-06 18:21:54.895462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.989 Running I/O for 1 seconds... 
00:23:31.358 19839.00 IOPS, 77.50 MiB/s 00:23:31.358 Latency(us) 00:23:31.358 [2024-12-06T18:21:56.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.358 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:31.358 raid5f : 1.01 19809.57 77.38 0.00 0.00 6435.24 2100.13 8817.57 00:23:31.358 [2024-12-06T18:21:56.878Z] =================================================================================================================== 00:23:31.358 [2024-12-06T18:21:56.878Z] Total : 19809.57 77.38 0.00 0.00 6435.24 2100.13 8817.57 00:23:32.775 00:23:32.775 real 0m3.410s 00:23:32.775 user 0m2.936s 00:23:32.775 sys 0m0.343s 00:23:32.775 18:21:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.775 ************************************ 00:23:32.775 END TEST bdev_write_zeroes 00:23:32.775 ************************************ 00:23:32.775 18:21:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:23:32.775 18:21:57 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:32.776 18:21:57 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:32.776 18:21:57 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.776 18:21:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:32.776 ************************************ 00:23:32.776 START TEST bdev_json_nonenclosed 00:23:32.776 ************************************ 00:23:32.776 18:21:57 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:32.776 [2024-12-06 
18:21:58.050546] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:23:32.776 [2024-12-06 18:21:58.050726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91320 ] 00:23:32.776 [2024-12-06 18:21:58.237651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.033 [2024-12-06 18:21:58.378794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.033 [2024-12-06 18:21:58.378932] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:23:33.033 [2024-12-06 18:21:58.378976] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:33.033 [2024-12-06 18:21:58.378991] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:33.291 00:23:33.291 real 0m0.705s 00:23:33.291 user 0m0.441s 00:23:33.291 sys 0m0.158s 00:23:33.291 18:21:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.291 18:21:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:33.291 ************************************ 00:23:33.291 END TEST bdev_json_nonenclosed 00:23:33.291 ************************************ 00:23:33.291 18:21:58 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:33.291 18:21:58 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:33.291 18:21:58 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.291 18:21:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:33.291 
************************************ 00:23:33.291 START TEST bdev_json_nonarray 00:23:33.292 ************************************ 00:23:33.292 18:21:58 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:33.292 [2024-12-06 18:21:58.801748] Starting SPDK v25.01-pre git sha1 60adca7e1 / DPDK 24.03.0 initialization... 00:23:33.292 [2024-12-06 18:21:58.801950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91346 ] 00:23:33.550 [2024-12-06 18:21:58.985948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.808 [2024-12-06 18:21:59.130887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.808 [2024-12-06 18:21:59.131039] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:23:33.809 [2024-12-06 18:21:59.131084] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:33.809 [2024-12-06 18:21:59.131114] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:34.067 00:23:34.067 real 0m0.722s 00:23:34.067 user 0m0.471s 00:23:34.067 sys 0m0.145s 00:23:34.067 18:21:59 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.067 18:21:59 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:34.067 ************************************ 00:23:34.067 END TEST bdev_json_nonarray 00:23:34.067 ************************************ 00:23:34.067 18:21:59 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:23:34.067 18:21:59 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:23:34.067 18:21:59 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:23:34.067 18:21:59 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:23:34.067 18:21:59 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:23:34.067 18:21:59 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:34.067 18:21:59 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:34.067 18:21:59 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:23:34.067 18:21:59 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:23:34.067 18:21:59 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:23:34.067 18:21:59 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:23:34.067 00:23:34.067 real 0m50.990s 00:23:34.067 user 1m9.192s 00:23:34.067 sys 0m5.940s 00:23:34.067 18:21:59 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.067 18:21:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:34.067 
************************************ 00:23:34.067 END TEST blockdev_raid5f 00:23:34.067 ************************************ 00:23:34.067 18:21:59 -- spdk/autotest.sh@194 -- # uname -s 00:23:34.067 18:21:59 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:23:34.067 18:21:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:23:34.067 18:21:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:23:34.067 18:21:59 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@260 -- # timing_exit lib 00:23:34.067 18:21:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.067 18:21:59 -- common/autotest_common.sh@10 -- # set +x 00:23:34.067 18:21:59 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:23:34.067 18:21:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:23:34.067 18:21:59 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:23:34.067 18:21:59 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:23:34.067 18:21:59 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:23:34.067 18:21:59 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:23:34.067 18:21:59 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:23:34.067 18:21:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.067 18:21:59 -- common/autotest_common.sh@10 -- # set +x 00:23:34.067 18:21:59 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:23:34.068 18:21:59 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:23:34.068 18:21:59 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:23:34.068 18:21:59 -- common/autotest_common.sh@10 -- # set +x 00:23:35.970 INFO: APP EXITING 00:23:35.970 INFO: killing all VMs 00:23:35.970 INFO: killing vhost app 00:23:35.970 INFO: EXIT DONE 00:23:36.227 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:36.227 Waiting for block devices as requested 00:23:36.227 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:36.486 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:37.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:37.053 Cleaning 00:23:37.053 Removing: /var/run/dpdk/spdk0/config 00:23:37.053 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:37.053 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:37.053 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:37.311 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:37.311 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:37.311 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:37.311 Removing: /dev/shm/spdk_tgt_trace.pid56895 00:23:37.311 Removing: /var/run/dpdk/spdk0 00:23:37.311 Removing: /var/run/dpdk/spdk_pid56660 00:23:37.311 Removing: /var/run/dpdk/spdk_pid56895 00:23:37.311 Removing: /var/run/dpdk/spdk_pid57130 00:23:37.311 Removing: /var/run/dpdk/spdk_pid57234 00:23:37.311 Removing: /var/run/dpdk/spdk_pid57284 00:23:37.311 Removing: /var/run/dpdk/spdk_pid57418 00:23:37.311 Removing: /var/run/dpdk/spdk_pid57436 
00:23:37.311 Removing: /var/run/dpdk/spdk_pid57646 00:23:37.311 Removing: /var/run/dpdk/spdk_pid57763 00:23:37.311 Removing: /var/run/dpdk/spdk_pid57869 00:23:37.311 Removing: /var/run/dpdk/spdk_pid57992 00:23:37.311 Removing: /var/run/dpdk/spdk_pid58095 00:23:37.311 Removing: /var/run/dpdk/spdk_pid58134 00:23:37.311 Removing: /var/run/dpdk/spdk_pid58176 00:23:37.311 Removing: /var/run/dpdk/spdk_pid58247 00:23:37.311 Removing: /var/run/dpdk/spdk_pid58353 00:23:37.311 Removing: /var/run/dpdk/spdk_pid58824 00:23:37.311 Removing: /var/run/dpdk/spdk_pid58899 00:23:37.311 Removing: /var/run/dpdk/spdk_pid58975 00:23:37.311 Removing: /var/run/dpdk/spdk_pid58997 00:23:37.311 Removing: /var/run/dpdk/spdk_pid59150 00:23:37.311 Removing: /var/run/dpdk/spdk_pid59167 00:23:37.311 Removing: /var/run/dpdk/spdk_pid59316 00:23:37.311 Removing: /var/run/dpdk/spdk_pid59338 00:23:37.311 Removing: /var/run/dpdk/spdk_pid59402 00:23:37.311 Removing: /var/run/dpdk/spdk_pid59420 00:23:37.311 Removing: /var/run/dpdk/spdk_pid59484 00:23:37.311 Removing: /var/run/dpdk/spdk_pid59513 00:23:37.311 Removing: /var/run/dpdk/spdk_pid59708 00:23:37.311 Removing: /var/run/dpdk/spdk_pid59745 00:23:37.311 Removing: /var/run/dpdk/spdk_pid59830 00:23:37.311 Removing: /var/run/dpdk/spdk_pid61217 00:23:37.311 Removing: /var/run/dpdk/spdk_pid61429 00:23:37.311 Removing: /var/run/dpdk/spdk_pid61580 00:23:37.311 Removing: /var/run/dpdk/spdk_pid62234 00:23:37.311 Removing: /var/run/dpdk/spdk_pid62446 00:23:37.311 Removing: /var/run/dpdk/spdk_pid62597 00:23:37.311 Removing: /var/run/dpdk/spdk_pid63246 00:23:37.311 Removing: /var/run/dpdk/spdk_pid63587 00:23:37.311 Removing: /var/run/dpdk/spdk_pid63727 00:23:37.311 Removing: /var/run/dpdk/spdk_pid65142 00:23:37.311 Removing: /var/run/dpdk/spdk_pid65405 00:23:37.311 Removing: /var/run/dpdk/spdk_pid65546 00:23:37.311 Removing: /var/run/dpdk/spdk_pid66959 00:23:37.311 Removing: /var/run/dpdk/spdk_pid67212 00:23:37.311 Removing: /var/run/dpdk/spdk_pid67358 
00:23:37.311 Removing: /var/run/dpdk/spdk_pid68772 00:23:37.311 Removing: /var/run/dpdk/spdk_pid69223 00:23:37.311 Removing: /var/run/dpdk/spdk_pid69370 00:23:37.311 Removing: /var/run/dpdk/spdk_pid70887 00:23:37.311 Removing: /var/run/dpdk/spdk_pid71157 00:23:37.311 Removing: /var/run/dpdk/spdk_pid71304 00:23:37.311 Removing: /var/run/dpdk/spdk_pid72817 00:23:37.311 Removing: /var/run/dpdk/spdk_pid73086 00:23:37.311 Removing: /var/run/dpdk/spdk_pid73233 00:23:37.311 Removing: /var/run/dpdk/spdk_pid74742 00:23:37.311 Removing: /var/run/dpdk/spdk_pid75235 00:23:37.311 Removing: /var/run/dpdk/spdk_pid75380 00:23:37.311 Removing: /var/run/dpdk/spdk_pid75524 00:23:37.311 Removing: /var/run/dpdk/spdk_pid75970 00:23:37.311 Removing: /var/run/dpdk/spdk_pid76744 00:23:37.311 Removing: /var/run/dpdk/spdk_pid77126 00:23:37.311 Removing: /var/run/dpdk/spdk_pid77827 00:23:37.311 Removing: /var/run/dpdk/spdk_pid78312 00:23:37.311 Removing: /var/run/dpdk/spdk_pid79114 00:23:37.311 Removing: /var/run/dpdk/spdk_pid79535 00:23:37.311 Removing: /var/run/dpdk/spdk_pid81541 00:23:37.311 Removing: /var/run/dpdk/spdk_pid81987 00:23:37.311 Removing: /var/run/dpdk/spdk_pid82439 00:23:37.311 Removing: /var/run/dpdk/spdk_pid84563 00:23:37.311 Removing: /var/run/dpdk/spdk_pid85061 00:23:37.311 Removing: /var/run/dpdk/spdk_pid85570 00:23:37.311 Removing: /var/run/dpdk/spdk_pid86644 00:23:37.311 Removing: /var/run/dpdk/spdk_pid86977 00:23:37.311 Removing: /var/run/dpdk/spdk_pid87934 00:23:37.311 Removing: /var/run/dpdk/spdk_pid88268 00:23:37.311 Removing: /var/run/dpdk/spdk_pid89228 00:23:37.311 Removing: /var/run/dpdk/spdk_pid89558 00:23:37.570 Removing: /var/run/dpdk/spdk_pid90239 00:23:37.570 Removing: /var/run/dpdk/spdk_pid90515 00:23:37.570 Removing: /var/run/dpdk/spdk_pid90589 00:23:37.570 Removing: /var/run/dpdk/spdk_pid90631 00:23:37.570 Removing: /var/run/dpdk/spdk_pid90893 00:23:37.570 Removing: /var/run/dpdk/spdk_pid91071 00:23:37.570 Removing: /var/run/dpdk/spdk_pid91167 
00:23:37.570 Removing: /var/run/dpdk/spdk_pid91266 00:23:37.570 Removing: /var/run/dpdk/spdk_pid91320 00:23:37.570 Removing: /var/run/dpdk/spdk_pid91346 00:23:37.570 Clean 00:23:37.570 18:22:02 -- common/autotest_common.sh@1453 -- # return 0 00:23:37.570 18:22:02 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:23:37.570 18:22:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.570 18:22:02 -- common/autotest_common.sh@10 -- # set +x 00:23:37.570 18:22:02 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:23:37.570 18:22:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.570 18:22:02 -- common/autotest_common.sh@10 -- # set +x 00:23:37.570 18:22:03 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:37.570 18:22:03 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:37.570 18:22:03 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:37.570 18:22:03 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:23:37.570 18:22:03 -- spdk/autotest.sh@398 -- # hostname 00:23:37.570 18:22:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:37.829 geninfo: WARNING: invalid characters removed from testname! 
00:24:04.448 18:22:28 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:07.741 18:22:32 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:10.344 18:22:35 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:12.896 18:22:38 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:16.182 18:22:41 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:18.744 18:22:44 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:22.060 18:22:47 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:22.060 18:22:47 -- spdk/autorun.sh@1 -- $ timing_finish 00:24:22.060 18:22:47 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:24:22.060 18:22:47 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:22.060 18:22:47 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:24:22.060 18:22:47 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:22.060 + [[ -n 5205 ]] 00:24:22.060 + sudo kill 5205 00:24:22.069 [Pipeline] } 00:24:22.084 [Pipeline] // timeout 00:24:22.089 [Pipeline] } 00:24:22.103 [Pipeline] // stage 00:24:22.108 [Pipeline] } 00:24:22.122 [Pipeline] // catchError 00:24:22.130 [Pipeline] stage 00:24:22.132 [Pipeline] { (Stop VM) 00:24:22.145 [Pipeline] sh 00:24:22.426 + vagrant halt 00:24:25.712 ==> default: Halting domain... 00:24:32.365 [Pipeline] sh 00:24:32.643 + vagrant destroy -f 00:24:35.954 ==> default: Removing domain... 
00:24:35.965 [Pipeline] sh 00:24:36.243 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:24:36.252 [Pipeline] } 00:24:36.267 [Pipeline] // stage 00:24:36.273 [Pipeline] } 00:24:36.287 [Pipeline] // dir 00:24:36.292 [Pipeline] } 00:24:36.304 [Pipeline] // wrap 00:24:36.310 [Pipeline] } 00:24:36.321 [Pipeline] // catchError 00:24:36.329 [Pipeline] stage 00:24:36.331 [Pipeline] { (Epilogue) 00:24:36.343 [Pipeline] sh 00:24:36.622 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:43.234 [Pipeline] catchError 00:24:43.237 [Pipeline] { 00:24:43.253 [Pipeline] sh 00:24:43.539 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:43.539 Artifacts sizes are good 00:24:43.548 [Pipeline] } 00:24:43.563 [Pipeline] // catchError 00:24:43.576 [Pipeline] archiveArtifacts 00:24:43.583 Archiving artifacts 00:24:43.678 [Pipeline] cleanWs 00:24:43.690 [WS-CLEANUP] Deleting project workspace... 00:24:43.690 [WS-CLEANUP] Deferred wipeout is used... 00:24:43.696 [WS-CLEANUP] done 00:24:43.699 [Pipeline] } 00:24:43.720 [Pipeline] // stage 00:24:43.725 [Pipeline] } 00:24:43.739 [Pipeline] // node 00:24:43.745 [Pipeline] End of Pipeline 00:24:43.785 Finished: SUCCESS